[Binary artifact: tar archive `var/home/core/zuul-output/` containing `logs/kubelet.log.gz` (gzip-compressed kubelet log from the Zuul job output); the compressed payload is not recoverable as text.]
O>.#%t(oU:nzM|{x[KlS;Kc!Zn9ѨH+A+fG?o ض"Kuئq[q*fE<6z: x&I_n>>t2_FFC7kL\v#`/L4*c}:1ecp}W"`P*b\0zu"JטGreh#ɏ]~0w/һ:HZOU TIn O3NCyQ׷|bӄ5Yu2 +u,3]+9F\G@{oc+5>\rŔ6rrFki5qEu~)q(W'P$W l㙻b\XiA\RiSj zE(epC٘vz –x扁%i(CcUF_vI%`ę0f.iuLq&O[6=Vb0z?pX\+Z/tV;Jٱ(^B Wmzi \E\1¡hG) W=+eGHE0b\ ӂ\1䪏rA䊁G#W1"Z/;/WL W=+Z;\Q:bx+ES䪇re7\1v]q W+1 #WE4rŴSL摒rt"+&FDybJ\!8>lT%e-ôm'̘!L y$6Dv1 _+4 a͎zkӲԡecB12+21 りe´w}”aҟ1wH?\1f;G/W)qRy:r[6= ]> p+`}vgsюul}Z ruߦ,ڈJV`+=tVͤ+rʕB*&b`碑+E\-ubJ\i.j"YP}~V:Br#I$4M HE"K` S6NZP[(Iy)Mt!LJP9]=FT{V*oꝍJ>dYQ`LZ'CfԖҔdեWYʾc%U. B8' )])M eTQ:~Ȥ"+sܥ ^][$iX,dYX* t^>y_=ZmzUu 'N3C7T/_n3ٸg^=9 J d|#'8*t6_ԍѓ1]ISi~iy]_?\Nt?nJ!u=ajf<W-YW-T*;9F6U{%tkWG\oVB4LdJ\YG$W,u<ZEm)=rCrB`QF#W{C[ZybJi\V4d`OH#`i\1bDz)WM<.Jy1njchhe_t,vcx' x(rY*@L3Rm"/!u6Qҝlkef^r㺤?7q.&ժެRYLt1>MGga׳򘾙&e92bN6X?ɓ/DeR7oeoů4DJT(SP!4όC)ޘoG<+s}7&"ǏgZp~9~LkS°8~n˦wGB8TrZZzyh́Ʃ(cTB Wmz)>""`ubʮEr(r" D#W|,rEFȮSv-G W#Whxyu.x>{6X' Yb.=0վI,1%$u$EBAd6r'3 \RU,]HP36 .Ԇ4ҔrWJ:eiƣ 3-vM%J/q꾕ށHFWXi\1eײGr(rJ:7m\1fTʹ.WL9YRSa"+F\1f͂im#B1""ཿQ(W :bZwŔ~z)W^FfU WXiתE^*$wNV8*~,`ss\<) _Lq4FS'`θZ1rlGiÈ䊀~8るEۮQ9U BD$W 6"\%E,rŴFv] W=+ge:xbD4L;?ʔVrCB CƳløb+EUfT0Ui' sN`a'OD7_on@ٵ%`01L:?8t=oHXQ &"`/ 2т\17\P!""`xW]>Mt]RU:+ULsW xW],rūݟbAz)WE5G#W*+ί 2ezr(IRhq]4A"Jٵ-\=\!#~0 dhe֙\>l`Ο ]㨀'jeͣ`3'KS E][sF+,x7~qUTdV>͢C4IvR E&  N*Dp0_Oߦ(F iU{R*06ƗY.cmO,+05NWҶ+-Gwg8r҈ayjusMsNVsͶ&M{UYfs䮮g;uu XShԕЭQWvE]l)jl؆UQWDsYL8o=qK霳դ f) TW ! „,ue{U5=9K;uGTR"u# F]rueg+DՕRXI]yLW&wegPՕ=:uՕXcL*`~MV]dT.2 F3ك'ԓu›B?%t F( 6weh3<^jeoxpf_}?y _\x A0_ 3r. ` C^f1& 4!&J́p <^8~A4EH 7-; wwTْT(`*oDh:+v|ܒ6=XD#4GFK)6$U ͓_o\f_j.Y@Ko1N]t/3A!e ) ŀ(e^/x{l|MxlM }/y1fZ F٪/@)Xڇ0Q3v&NIa M d{xrPoh 2s)6˼=|sr5[w;Wča|1Gf^ ARiS6,Hb-1=G-ܵ6<+ e lex޹guI$!bUuWhLM$7] 2 E_'(V)p*e*R{eᔩ-OvTdUT> k,3G%XP\9uqpHiq5 K i(b,d &^\TIJYFg0}@?~Kqc MV 2(ӈ(O!ڐ0@GF"~@% 3cyFxxĭE_.)I1)B`k#?䝽my7 |C:XwH0H}ӿU6 g6ڼ"B&^"e;y H"0LWد4!3Xb o"FDh%HHI„>cp~e Q+,@VjK&KcKLV0p;Ӳ D~%-Z2@̍-E,ҿg$Jdˤ2PO>7&6$О6xLD X"TH{z+g]*V0Ĝj Cmb#M8Xʏ& Q\ U 67u.- pVDYhgѫ~/VEǗLrn-B#VSe*Of=Ln}f>50qM3۸|5.c|#YYrKܨBZ<}}Y2oY\fkJ{G/W}-VOxP}{ɿ;ɿbw poCrHݻ;Fx&pC >Rݤ1>-i /LOmRe.h`yI™sB7|>٤dZ5|>1l'ωDn_Ԟv M !*ցDLFCFC 7FAz.Cbdmtj:< 1it#\ζdZs!+ma96`K(D ƚ2|.>#HԐXI! m͍P]KX0٦MOS0GMd ne;/育*`, ~w=ԘIy_dE+qU%(9v*I%|Mkԉi <«<Xd' ItO>d 2[#BYgm*ՎL\6KGLm% 2fv۞J9 p)te-_s qHŁv8( σĬH I o:c@nJj=-Djw5~?IY/ܿB-j(,D{$*Ao1&Td2ɦ[zx!LxZ]LUF4Yz%M+x.KuJߋ#์amhz7[bmQFJW͊3)+ee?" <ڣdjxn'2vgCmeu:'N5[נJ3&SHܣz$+ťItƋ{Aeg {Nϒ7s=p@2>; Üք݇sc璆c7vtG0 s0w(QI/Y>fgyMsJ%=P,,| wߛ;F(Okc /MMLm(4sȖ.N;Vx_[t|[:;4)_+C=yj.CY!Զ9Ҋ)vRVO7\i+. _~Q&GRIL|Js|d}\(H,c&K*> Ůx>{iG| Fjޚ*1oEt-yYcG|5(6ޛTLxctk5YoYH"dR.TF0F0b_duipԻڶpJY~rIp\%-.Ӆl2'$}쯦fvkO6z814ډ,^$4Ͽ"Aaipeo(hĄc0`x_ĚW-hׇ~ Co50IyV:ϖo|Z~\םR(sjkyK>6ܵإq8bvT RU JCR%H"DT_Tsq m{!|R= ܌w6ɵٽ)_nhjtjǧSV[U$VjJ &d!sx>1x\cǞGPEaL T9<7ڋ_1˖.i1$g =AGyg\S5uZ Jߓ~I1M% T*?mM^Dff~ʉe2}۹szx[V4}w*d#wFGC_4Y>GthUPqbogdt L=:LqD3g*E} 8B!a(0R U'D,Kk'nǖUN´aFv4۷7rdv`antްfaT$x\wڧcdO{O8R+ MQIqh[M 0 M;dY܏L N %wʩZ޼9p])p: Fwz @v* $3]3oq <}A7}U7vx v$櫸1z&fExNF( l sf Lpkާ^jY_}dp]\_\S{/hD%ZJי <@bքcn`ubMa(#ux^I<ǧiʶ*P%5-ox7OEW18UYwZFV]y&RxQљk,ḑ#X{EE(H |+;sEaɭn>Ԇ%ރkwHc,5| LI 2A &re~sϻ tRPGUb̳CHmrb\ cjbvÏ !ڷc5X0[vsdzwBo;I,p}H$6z%q[UݫrӉßzĈCjF? 
(QnEnl~E>#gKʅqMM_p8uG`BPs,f`,d= T$Műv|՚5r[bRBd:u?8]{iy9,6H`])U,@¤q*Ǟeڟ7Y nVK agP }B(pNi"BWlR֠NؖRB+1څCgǗY+wT L*_0 -(Ⅺ0]CY+EDW#X(|rE3%NBF@}|URQfrwӧ쏂Tꧪc>$b{lc4SwJ>ƾ=;]lK +41^(f/ӏ}rS0?̣]<)E“,+FTU;>+Q.xIceaE,[k^DPElhV +R VWZr: Ͳ"(k9} }|@ "D=Uf d:V2A#Ěʂ2`Wa^Llr;wR_O Iz ́=jgAi<תɽg95MwL7)i!XF>(V"(Rہ5.'6w3b'hu[XH2=8èg;2XNg;ۛLƣa0s &~1G'@ӨʴV!0*sڠI*;n\cТ!Z]O5`4 hTLnOe/nrCB*V{=_.k]5GWnrڑP֜6BZdi@tAM0e: IHtLE7?7 L]c~d;gL1qB* C<{ZR(Zu'd*=6-Њ f2uմ,*TBeݡʔt% 6 ʭ6+TAI|A=#)OKEb"ĨTԇ**,Q{MZ`RN,NHgY-ՠNLΦXrwQ݅""Py< Ǣ@z:pkmE`ݼn;6aś-[˒,$K6*yV?s?) hۀ+epS 0rhikQ訉QQ" ഋ a9omBz5RN{D0VV_0JOip"? :\$c)kT Or+BHArLj-gPًmFtܶVIw۪~o3-0iV謝K!PLXwDHR}%6&AGL0̲ZYe"G =%2X-X RMpa0N!*U xOW) 2T,#z^m3$PfS=JnmRPB6)Xr&8k(9q{]9 i;O/*a*TyZq 5fgV@A3ZW(I39 @iDs\P f+韝1TfJS;M( l18IbnXXe%@&Jcų ztk?Z&4d/c{adTՌuFަEt) jA:$lHԊ??I L"Z'Y@ jCpSfpsW_dxW?rnX^]!{3DwWgÛkd~?Oi1ǿ~a]jRŏ[Gh.J&>ܳ鯐>8Ǒr[Dd OY>$&m( ӴMNVH{y3q NͪDJ=:'e5JCqmX|s|_IWj@R %GAM\%`?^N`2y.gsٺsp}BTsߓ:2JflΛ;J1}D7S ӌv?A:͑DM; .%d[53}+^7Ru}'Bv$JF5v(skqkl*yOjfr7 ѐl|m4X|\u2 f#ZH_ oΞOx*@:GCtML7uMp8*D-doLFں> W@4ncZqo\wB臸!_n> >FO썚p:'V{挖gy}UA܃u4Fe) Q?~+,emVS][}t7W# H]H9zSPllv*+ꉊEhu7dMUp6__1*?FSR ,Amr#DSlʺ[iC)6о=c[Mhzk{_.iv5 3Iyɭ{R. ~RX*\d7Ğ?.2S>֋ UD~Q6W~/Pq^tZE@uiv4EXlD2!zLhTL9 6ˈPtKMOSj[՞FS }4&`ULk&羱pr>֋2շߟ\3ȗyWWWRDbu L Gv@FP],P2'ʃLuGWbN |kt܊F΂[ V7nA$QE:n=cZ|k Ƿvy+e 9/.\3֝r S3{j?{zb7?dY؇Lkeo;>.-[a}{ ʅYj/TmETB?]t'tIT4ዿzF4T-mюMF՞SWM_ps]kY=sޅG6~mҤ{ k)nl k)j1"4~Ku}_Zb61+9ؼ&ᴮI8kN뚄fMB32Ʃ E (YYˣYe 9('=<\#ctJg /uh#U`V"DH7ec6@`fd 8[kt+l..0i<$ TPPeIn5A֘a7eQeqF+G}4,?ԽrT{!'E 2 ;p)dj߶HcB& nsZi0ui3  >j+ʝN1ù1VWRzC'Mc9vVipmߙ_@wp;<'7LyĽH&+ivo/Fjˆc½`N%qLC'iUŢ 0!)4Cwx\J jWRkծЃ]i5Y\TF~b,j8:`՛ST:vjvʞtJ":R^jfs]e/Q QRB"Uٛ*V0鸵Vւ"-z'l>/MhPӽаMʚޝ$Qr)Euu/Pm꾠6m}B+&+`2{s"_Nm0vd)@W!Ho&B VVu6S҅:΢)_ۜXx,(r hpic:j/9骇TG֫r^z?H  4ƜeõG~CKSyffYW+G}64>}Ԁ;' ѭ(|N k+Ӗ $`ѩydMFXkVh|w8Z]L`U@@qm4M]&/h !bT`meU K.,BZORgytsyT&zM0##kuЀtc[R,ʴM Wƭ@4p MܶT'+$((Jp[:Xr֛v;ģC<;ģCQ=QsuʞݞQv7[QEEEq #n8%.B|r'M=92ȯe H[9){/ۧ^^ޜc٩W?{]{-:#c) !oxI k< 62$,eܧK2)eJ-Q.5T*T%D=)dO%Eru,-NqҤ?5˩.KMV|$:I.(\vJu=ν C)D+n =-Tĺ/oZaJZTCݙUD*6Sٙj`(Sׂi:('$uFHBIP4I𫛶4aQ%%!Dn j8\챽9FM ?DKSd2AVjs2Ze`Iau4ZLfO\bz,%,rؒNgyw.A$]MVΔ"_ !nrqsU=mXYli uB{J[>N›|Cn8JcgwFc}6/.s;d[tp LƽYT6p@=NRuGӺiTsN8A,ľ V޵6#_Yx٪F_vd2mUI[/Ճ!%JɒMdfʲaB#%f-QVjpG؁:z~: 5RX;>܆} 0 ʇq=yay[[hAS__!0W1^Li{zþx G2[c9&>:*cAaB{,>m;Kr{[{,83vs\n{x0=Lsn)`PUOr 4eC~,߾7fȥ]7:IZO1? qEkR"(sDb#B srwE)~]0**%S;Yffdg-=\3ߐ D\i8IxĈg(nx)0 ;a=d-nml(!ؗ(O 1%Hvפhqջ::$G_ W\"'@>t,e\5iW*t ZV1()!V8C\Rf4ؗWDUOV\BppG743Icp%'Z3φoa-USWA@,0V>B򁴅Qwb#E3Z֞1tbOƷrBux!]cߠl%;j.WWp#xfUoR(#<v[G2y%xTX[w9,[oPзیs6Pɞ^y 14h x4JVN 矮k{P3ٍmzIH4 ' ;{r'mH޵;=]yvS6?۪vϘz~;hvj׷Ciիf1\}Wͯ߳zXss3؉[Opn_3b la9?-DlOgPYHZ=7*Voݎ+xκx@8{z{.JЋ՚<\oP%"Z7~9`)+v{=g^M#^_rs[cFBT9W˟K*ݮƺS,c){-@ t/./< *8T{2™s(3lDh. t R, T3`64}vsqd%a aL̪x%Q,9k }^o I>zi` #o%p1H QU n%,rJBّlWW; JMTLV,&'}u`.shE9ՆcE鹜p )UQcR3+A+;wkԬR=VI369 `'9 /g'_>)$ ']# 1@2`U7@}g>}c/Sj^4 YAcc vX4BTZ +GjQY-k͈2U9&V{ +s-%\[1Bq 1.gFojm%d+dmCUp "+BNp%ir92/躪g@P)T>!ۚ Če*-!cD0P*#ݤ93̘%uflVQ]Z9kn$K#`jUNw-ɒn51I(5Ï}Y |_Vv_֐d9DE)l>a։ )0|\Sxġx b 26A?Bnj Ԍ͝07W`UԘYTTZ(E/9mz=jdnF Xm=>055pkƊm? K&nk46MeGZ]b^sR!9lPVʛ\DΥLU֮[mJn}UzMSk8\ѩI`IO" ~_>mqb>V>vQFŏz>Bշ򘘲b}⅝#~?|~wv_c^w|sq!" 
_oۯp:ȝ/oXfoFlп^Ϸg8ӻs<Szlac33Dž |N7|5$yǜ.X__e%H̱ppTo]N͗(d2U.㏀s&+4c~2(x CtT2-KL^0@K!Z.'GI(V|ӝ%AT#aX4Z &fXhC"K/q&.Cia8@i@DW >!&9x+g'l\ f7:$Nk,vu_Hc_URKK<_gf Lq+}Mғ88Ӯ6 :y1XɞS2L `Wip(Aw/oчAj~lv ƥQؽmRV7k Z袄*Vڮw}7#I dG*DDe{,fj ej{㱹/.=kE63{(Q$1q8-tэE@R*Jd^3ˀB3׵Og1F&iތ5Ȯ+^_n&SՑ2ZD87c>G1?D7}"{̋q2SM{X>Z'ùDY/I hEGFh—3 UcIJAdS&P%uF9*NRrz!2|%+!Rvs3ni& ZhwioFwc9H(ymxL_#c2R0kGF7Br"T -rԶ5:l=JM AvUÊC=ps&x>O$uZ`1ZJuhM` ѐ7xpeK)͝LPKXIK3)pqs7W׳M5ձmjJ M M$YԛF$6Ss+1&䮄n{-'g;7gX")gk&rZO-tv:sb8zv~N7n_^vq?i[[4~ӂ^h>xMoi FNu3XJgnO}Y$DDѺSngieM19E<YD!cnIYYUQʀ*fj9Ʈ-?O*=39Cvz=VIzQz|K5;K|3ƛ<1H:P tA&jVĆB8ȯxv=8̺TDb zw]$}flT{p4(0ì(>< t 5==n4%܎k>)׽gw @?KBG07;`D!2eW1b1 smhD:d1_][o9+_3$U$ ..Nv.3Ou۲[RV_$Y'-5U,U޶ӑ#ڕNE5/#l;.ʕdG1Ѭ` 0B?r_~fOB-}KRB7sw,l$tj %b;y5eQRm/v >Ij{I5ĖlZ-9ߊoK-KT@&ie[NO^wT0uenԚa'VtS%8|e_!aǍqCxk'%%E!Y{.`-=/OCy^3~&ɒ/Cc=|)zHZ=>G$[@ۈ5 f?~ƒESg7GE[|hwuy{uwڂ7xljHb&j/WgExRfQ,#_2 mB&&x K v/gC;2gԥ݄>-tj-:GI(61lc6l5QC1wR&MVy=SmDIN)n͵QEl]YLTjV\A^\YZC5ސN]լ #0\O-Ϙ2Ҩuu{@Ogܷ:Vo_1eh;vs:6qֺE!En=}; aJm`W֠<ÁtbKW(7tWޭw_ogeY^ƫ1Cigw/vHMcȨ;IRXu9Th-1bas]yAD![]\ Ο2ȶL+e`>?O]=6\J˞$:^n&zzY,?&"k >sD}D9jFaϚΚڌֹi<,4ٱ.>ZM 0p HYպgkzjz)[gvқdjcVF0UG 5;nd( ݕ&D[a_'IK%I[ 9d{p jm07U9F=azH fC){XN"s<+Я吡qR!F/*U`z/ĔDoF.넋]WB,oU0kdyUeO:$%&Is!`FH'*)؃/HJs̻ؔb&L'+ohHDZjnRvQF{!)kn {aiٚTU%Iѷk'^0o͗<zʭ˒ `E>Ι9am-ߏ~kZ & KP՗jt{\RUԨ8GC8^vz*4(9/è mCs+-9h͎#+ ZmPyz H3_#$LhEfښH7649]R:R>༰m<.xΖ(U <\'4$ bΙԙ0%S$ NwbhV.59Oo}uK߁z|}+=6YT{tU zQ 6\gWIiZ7!H`w(ƻt8 Qî$ EQK>Jl#ϻg$Lau3bҪ{ݖbsEް"褍7MOT[ך^, t6k ,˸4FI")}PlΚBیըfi:g{+J8B5acy,zSWxJxZ>$@nϊ .o  04JƏ]H2e A}vGn9~-vC\Zv!⫢ypU(@C% (Ҏ XiO8!% V;eѨPƎWd)8^ֲ`'QIH8%>Y8URW_gﯾge~gLE!':3'?|Ic} 006C{7lxws}ag/AwIh3Xup X\,мykA=FN|cӇcw(XF>ƀ\-r0SmkGhY ,V8ʶxg0|?"24|#ubv~x Mq.ܧ/gtw5)lsNɖ8scJtvZ5$VQ5d7i?' Oez`}ɯ㛲V>mxb,ǽ;湸F*{z獧iptv8mE~3y^ {@`@gc5uVR|(iCz(y@yWKA57Dʽ;K N|ٿ/okiIhó?ޝĉL0tRGUTr-f4׈ b:S`A&b&qq,ADd8{;~ ALA}i\+SX,W.ET]‘M1 Aj]!6(NcSX;` S4&]$ǁB,Fᘒ @ph$9A1}b-oV2NO]hvC;aOY#w]|IMEH.Cf{kr#^").9\CbyU6앭 |uMF=}'HByј:#%|7h,(`I= Fq. )Ex}n&6cnyڛ_R}fU+UO|**cĹZz gڦA^"IF\qmjg]4qUNպ}VGh=eGgr%Y &2X#tY"ߺL\xZC?U*X}ߢӇ}Mi|WawV`B1jwwփ#Kd/SxZCx gs 2[{=\&KَǼ.\v&bcpC'DÒUw*р>nB{HQCE.6p({tdq6[=#.rk\GfYbgQ*&&y`ϡskh*ER}Gok[?l:ec>Z ) |잂5U÷p;?~7T w^ogkTuۛQoY_"2,8VumkՑaThEͅN\![y%chb!^l\J!MS c0=%85" @M'.\XiҺ@:%(7bdB -J^@ϲ+2vKQ1%toTo (WRt%Iv45JKۺmD)d,1c' MǖڂB _ȶs$MkGSITV eMCm+= t: XF%_F+sHɼy匴}eg\ɧ>Y}D>%&!Y M,$YM"Ϛ>ކqN9692&WwCl,[PasU";)P>&)3PRL3L 4+ H0<-LRm B߁z |nl@gp3_F4*rhmFq͏57xqűc*x6@c^"d7cuX얭YK`RAv[ dwU1gGlV hs,|g۲]S!:nZ Z$E}WTmhuU/,ZBՄ!#a(I2jКB gI4h3 <+IZ^p$ 5o:W!Q-wE*ܰCѢ55aɎba{$G#o/Kj+'ajČ܆` Z aWNS aavpD:k[zXb_ LZ@ ׶rmX{SnMi^V7MJJL@ZYeӜ!@9 lęY\6 yU;PJoXKR/h_>5 [zOVD%Be%pt##py,C3Gsd"DvA;V [6sgz/N]}JJq9S0=hgwMU l^ސc?V@F悑ݡC@sJM&/S;FdG--QxQK VP8W,xVTMA;Ϋr!;&dɾt.Sd.:?&d,1mfW hyh(ʾ"yc AmI`  XB;kTl+M+C{hwN;v &Xt < =c#%{cu3xVYAi no^݁C;V<ky m6}k^8Lzsdvޭr,yӳ˪Kq-K&ɷ>#<}o+ޗ dƿz΋/;~;o;Y,YGF_Nzð(*hM%SiqC9/eO[M,R;'/ʮ/MZ|?SO2A}l7T{ɤŗ~2&N2&M}Uëg hvD>׽ի"* |eNq?寋q _|xknǼ/Qg$NVz dg7w}=?g VG{9otvyv{=W_//\lm5w͍Hn6^hajfvr5L6a'"rlk-۴HHR&) }rWZ:Pˎ~(s`CB>gIT|R\gd7ݟgxz|t$šx(裳24` ̏9{ OcFb W'kiQB`}-uC\jGmp?8;YA6UY@dғ*BRA܋oMɠPN#6<:A|ZӟLeWd OyNg7XuA^\bvUoQ%y̔ O1JWQ;9b2gPk]vI &jbI#SqeӬB椳VI-mvu1$jԎHBcsNv6y@6&jDn[aX+0mST;vuS&dwQ3bQ1PZk8O2^=s=!61-aHmÜ&)dm'j$|s6mNj\Tv-P'6%jŵ])P;6ɔd b@6ih$j]')(nR;6#'jgYbE:1/0.mHA4dƯY x"}]^7?L'|<{䟍\Xq ,X_~(V{/*,jI>X~4_:|'A[Pgg o@Bg)TY!H:F w oo |L̻|ZK l% BVڠsP J~: 7=D9g4_e;;m-WXM(?khSR4\L_*f0:}4&_|9l: it@NgYw2?uP˴fnrTΣ~>'dꗓ @^^ws@56og3='%b!uc}Sd:[шɀʬߩ_Y\.₯/z:;8G*]\[Jr\W2q'|wڝR/#kEw5d'A56<(3sY`퓙닊i cO2PA^\@6cIcF]kQ#5}|@ƹ<69!fO jx>g_ƿiΫn,]zW_~F5_~'|ֱ~0r27W\C,~v>pI/dzxᑯu\ fyPSv{6rkԻ;XT)Yc֬!xM ;f˙e]Q絾<3ҽw E6i{FCM? 
ܻDZ43K%Tnf6|^_[" %Ҟ,rΥ@,EJPAއ9R$ma a-Z d7ϕ+yml\U:4W<-eSJ{P\8-=kiO'Xh'J'`56 vv/VFf#X>>Ι4KVZ|ՊtJ |32y:f4DDG%KhWj±{*5$:%j-ԎD*µsJ[Ďo )i K̮޸ݤo^1;ke;Y)S5DV":ݖ;69}ӬF9%&H..x#$Ҋ]'v YrmR 2; }t(@My-aHcWSvO7Rj\d#7uAz)け(1mav*-XBHۆ9#-*+RmVr%fNmffgYs Ә]bv KiC0;/Ri%C@50gr6&f׍oι{יv6"K ʎV bACG\U>5"Ci|UY^8aFI)<+Qy2/yJo*4֗b/FhPIkV?Mth]gSS ~4Z(eyk3.r+d&|ZNYy>QGOH+HŚ{[_FFwW,"$%^F+lZNuSg\^2~2[S ckHpW߶5M.*/'zP*Mdzlg#M(6"+iovG[flu[S"l|b}k$E4/ / Sy2`.%f xj2, e{)U'2D'6C? 뉈ns|>bi?<롲)PXGEv{]Ҋ )?TTHӇ3G+gK* SV +Kg=CO!G),MnPJjr\GZ{Lw5|d#d`R}9[iQk2+rm2e(ʛ T)QR $դѥJv4\d$#U5uB;#NɜD\n4Zg(. JByT}t)*-ŕҁSVw0*GUMUL)9]>+)fUPeLGgՒKBX2Y.dP*# *a#^D)!|{g #}^!+U4VVSjE(pM%*AёeN+ۜl>km.H\XSfyѰ*gļZǹ¿,+Yy bfubY %T<?/yH2FGXi s yEN@R_H_EQHlXya)1WT[}3Xkuz&v^|09Zx.7I6 s0~˩߾.it6chXqzA[\}p^ oJNW_^ws@56ocƂVUxqU6}5LgԷ(:Nozʠ&ۑٰу?d;9#:Wp;-?^X1Q!Dw>Z"ihXx]fydb~FZ`Ƹ g$g;okfi϶as'Ӫ㇀j-ȾhcSޤjH~vsNu@ڐ.gRZ>0 H]>6iɰz{;溥i^䤨KR\e cIA,`ޘ뱕N SyWI11SY3*f Ƽ{7kO_ΞE!^v\o~nxtX%ţ _62/y~dL.Xe]JѮNھ.cwynotazܸ_e,=?q`,$9r#Mlg}4RkF-.YQ |&;4<D7>蠼y`ԱxQ֠w5٠K &cϝKzD9;Y%}v}fuG=W:nb9],\>PtT؈]{ֳ4֚+;RrL5#Nh1jHO m=w`FOƱxىkP\whg, M]NU9y7=1`@%wIPZ63TgmO3B&8sRI|k(-_iTSJ%'==Qל[&R?pkiGCSiE*+NւVfCmvԮ!sԢ8 (W IB@@Բ8O;ӉoVe9/ Xr%M)ok r=jG@Ӛ$W愒BYר]v&hm:#3/I,Zׅyh5j,"Mv-RF0PbjA:J3ˏs/ir/VZXumcmh$CQsH3BDt2 \p4MY.<=?$s\g+|A۸>9&A&Y4 xo\;<(9~y(ZߩˋvRitڻF /88HA% hC1>I`o+8|AK3Ή_iQQ)q|گkT 8Jα\Gs>Ҡev9[3Hhm7F(NAY hm/'^fmy1.O -Z/iyB9Yk1pؖ'fu=jGPZX9$1ƮÜn)Aި]vG}Nμ$jGBuaF }HQGBFnœ1uCrOw5j׍oBvOI/XخJ#y.ی|X1GjE rbÎ 0@i̮1|8ZE;X93 VQ[O O3'y6</ӽ{O*pmYLɜ7-`@|.L.'vz"9u0Aʚ<+"F^K&LR۫<Ο/@ygI2C(FF#ꈲraeT(x-W,db4O뵝_m:a;IxCLӿ]f-2c/'_L%Rq~̲d߼}K2E˟~_K Y3|ĔBns_zrai2q0W\{W9poۿq3_^]0Gj隋$o+з|NLVf Z-m\sp6]iv|3oY͡`~ߜ\*cڻ]&/;P9H-0n=Τiu`VUĵ- ^'бGB .Z@N|~1Q/~UJtAp+߂|h٘#avdݒXVQNX(`}GPSlCXט]cv ]ͭķg˷t-J/߻R̐cdd3>W`vY6g m,@s_㻤mn"w,OwqF#y<>,29Jñ}]7@7ၥCqXu[c!r[g''0*1WT->Yd-ئ_ )[@hb <.(mquji{Bx>& zb C;qEdr4Ĵp9Sۑ|Ok0h甖cIrN+޺ZYiVvZ%~-MꨛH̀ $1W[ӓG7zז[)ŷ}KEWݢEw\]3BGV [2::wN_st]|bˉ0/iEJȮТtBzyy[/CpvlɘF0bCҊzh;! :Y5-67˭kS;2#H9vB DQט-'M3lO %ꛑ8E}-BxR=z(A39A0P!PA۔o&M[EB*R!𬌉k-deL^$EobPRUaEbQR~|>̗bEi!]:H0Z6:RmP"q$b7*Ys*)%~+L*lSș2F>~>Zp2B y+5R\jP<mCeqi9̱5$ z6 Oybd3z.:uN'[F,؇O|uGoKoRbYsI6ʝ[!~obYռA5?:5;|w'wqFT*F]ЍaB5ZN,m`tˌ?j8u> Mh Z{T#2BvaU{i\+%" /D:IeY2B:_J1R%Ť,:oVRsWNF5tTAA!AY|:V {A:T IS:W8;=(cqMO>kzIE^V1Y?`|:g0/`t'fHG#]{):HL5L$h+0. p'VkFl{BIn]RdH:FH)I fފ*2n8@fH.`L DS,B liCUqL~>vpap1f%j)O*[΍~:~'Ƴ%+ŔP۸ xIxcV(e)'OQ$U YiN ǵ#ڪ~7j?{ȍ1oc^ErAd%@<`X5|SomIfK{:dljV_I{g;RR̭J;g2 b uH{%g4yHE19AoA 5M{p IpFR 7$i=:#=8Yz6ET-EJ!Jю0YDV#.ric&{C"b ZCْqHa„l͗4%$LΜ,pU/@?3+g!J 8Z[^yItD!! )ynr&*2&t02bDuqMMh^xvcp:vD.hyWޚ9{y m>BݑO>)-D7u+) -2Lj ͂ oPpWMMd!Ն&9'炗BCeQLVe#6\GN0⎼^cd-ĊTHNd-|%"΂ Bq% ^(-q1F:%Q ,J$˜*j tS}DWO/؈ 6w4}#zs0 [ 7-EciO T8i8 H2&+&x iEAG,p.2G̭{yy Hҕs>de-%Dе(x1fDICnAcOoeŐ"bKaމD"D^Di! = *9k%25H[V !Cm8 K$чK1dz/[ NԜa Ń@rXcr6#0 Y m/vCV&ۈ/p5M B;W+:Z3ӨN`<;  US R6U)%YeoD ;۶R;>96V Ligsǜ L FrRsS؂ @jNzZs%.HG3=" +0'8 9xL)PKRd(d, X2A?x ( lNPҙ%*7Ҥ,wjy6&CN7;ۖCRϛױSBnߘMNHO 5nAtá"{sc/}yj# !^˃ {[hHrFe|M^E)X::{Nl^_{'82C\R$YDDvw[Udsn'r&[g|KlZ9$MuV8wV\}zGiBGF/Ū} Z`'fm+3v՗ |9|\5\dO'BX{ 7j[ }1U;/Hy@! S^wA>>PhZP9aFavozb̓ʔ%=KX@W ͂{9#K !vH!C6(Z+> fĶ,12F85Fxĉf[445pr.1џ(v%2!"y;ޙ:aQ,DdԀ2w|O)-D\Ѓ2c}c<-fR^͕{.Iat=9p빋GNTQufJ;?ꭶo|;0]<JI! 
0}B!ܛDr>oKIH*lً~fIǼO{Nl|;,fM}8\,fO|da!oɑ-loOO *\5pia4}B}q ]Ț"7?ٌO4#%~\Ǿs,;INu&yJH/: , 1$],e]dJsq9b`TPk 9BAq+ҎGaPsvJvF 9emV0-xnvȉ m!V" (s^LSq2kkl$k˖:u[) S5€I;4 TãMYy ڶ =;΋٤3=M I$6-VQNA3Oh(*:أWkyNR2ől5@NJ)N,bJ8a$Y^QҔMP{< д̶ҵMjakci%Vle,jl;3Mƕ9s 9ym<4&.MiW{ŭr2.ѥY9l‵hz'҅Ũ@B*%j`A!3D`|wUu'NN[`wo<9;!1{x/`savǙy5y7~\y6"Mm^ z;vF=_^o8j BHah!r~`:K1 =j;e7{^;z'XÔstVKnU(PB2pƢq`GC.}3oQqޡ};ɱuMY9>W_5o$#Ac8?Xp2οJYr%tbJdfNdrhSN|-zWb9a[nGdwIt 퓩;Z"q$ m3c8a%?ӧjǯ;$㼗++g=($DDr/mŻ{/ Em֞ b0KٯP&FV <| 4%[=?PRkdq; Q[ɁRA &ǼP$DR E+D_ƈ@? h2%+:ze2 E lCfT9EJWRciFCY3_gEDBі{jsY\Q(b6vaI;oȡXHM9{y49\2iX F0N&û~2Z䄴 pD8\%D/6e-:(ZH!G?9`peԳ1:ں3{5mH. rP׫.- 8ro{>'v4}/Tf(Bb)=VjyFK:*JQcF"JU #P*+ '!wI9+4̃6YV GҒޯ0HZgo#ik3EKQ(r3mIE9$ze)gfUTHPcb^l9LGH (yH46T8fa&I`>Kh۞v3QmЦuAX̀;}i ngM ~ޙ]tEJ~;q+H ubA㢜ٍpf\T(GutഖmR%,'.u@Kj@Tq3i<y\LLN/k|%R'!ɦg݄<^' VckLa9y'/KKF`nV~%7 ߸ 'U(PBGk$/K=}~}Ya^5oaY|rsmnǺZM]|JG2?aUqݦvb.?4o!ks*Ϸ|Sչ:Ϛ@q4+y+R*|qct|2!|&#>S?::9 ~??9=)`4[VcR|ػazה\hndvPXQ*bcEJ$`^\[)cIXRQ % PW_"C,mdNG˸*Ŋ Aq~}=6njm;>(r;-U/`m$K6@:Տ/c[bX v,7VV;V8zse߯bSO ># kzxGlvYޏ9ԟ( _ҊSjC40+T[ A-b}/`]7޿9,N!u)oHC0F2-ϦWgm ||MDaHjY_ƇjP?h@;LSkWgٸм!-F ˑ=<ռy=V^_Z ïҬq;d. 4>4~:<4gt؟,Z)}{7y:[|dǛ{U#ڧȠ}긶j=2(,kQi}ڬb1m-V6QȘ#ڛ)v9h?Z-~MlwZX́?<E+6p,pLSQZ[1|#; )v|ru܆-ЇMm{X3tѵ} c}:!"V TDBhA'Q(xN-mU#sU3|mn B(Hꈙ jzTzjG 5/jNtǥi1i?kt֤a˹֗pS^+S2$d DgY{k{_{TB1̓CACTYB@/xW 9GQ&9/7YV-0dc J5#3][o$+_85#Dd @$@^ t)Ls$?TmvXٹtJ$E}Ee$1 ḡx&e 61|QhNZ0i-pzF:(gݳ!+ԾZ+i(y/hu&ϼ6s:M;Y"#&U9fH࠾\O; #C T4z*.^X '%X.$C18gV=xR[%⹆S2Z(9bO?Nfm. Պx>g5o|:gz&S0zz.y7W95p*q[w{kZoMήJ" ~#ɵZ遃~g/*G=PE<{}T%{JJT3Yt>N&-Uc_{z}I*x$s=N*%LNe?JIZiw<{({f%C5{.xڬ+Z4E&?뺒:F q7#_ګU]Kcō۪iۖ_>p/\,_Z]b'G5NosN3bcoz6}zś+b;kk^ +K'_ŻX!xzt^t24|6֯b >% vDOA^o8#-Y{ N׬i$VvH&ϩX@ ߨ7Ƿn|?9~<_!`dҜm^+hbW w2V~Kp†-һ!{Lf*%eB:91,v͌W!X3de/h`Z -oa}47 Ft$}<H<)@s}[,]qk~b[*W u,#kI!vH9$.2rn)q Lc޷d gRʬ|3BI5aVaD70V3ljvŬߓH1+5!1 OCzu'Yvbojl)/頽"6G /&ǿ=51Af1}.4|UQZHŃgfv}y!aXY/(;kvĽo|h Ƴ#o#j,M)u6ĐQ$~QCHr힍Z3KD~:norF̯m㢞of0bcV[i=*,zsKMp~&HVtb o )lښ,*mג\7@hqT!aJ'fbw5ߎ6}wa"uC k  y4mڊk Uy;׷xizAQx~c:u`|I:b`~#kڛ;\BT~*AeUo9cMV{PyCtCgTKvJX\Om>, ЮC;\ 6i=[y[6adxye?m+~k=苴Ӳosʅ 1:?U=^sHwT5 :OXJxΜMی +&mZ!<8X,ZQPB 68(>m282@1Ot)/ \ߣ~n?>/#2\We\,̛2x?j<.)]iQ9:K!pm`$*6u`PN773'ϩ.a|Vpku2v0/U_Ѡ]Y=+Ĉ1AxGvmto :{Nd^k.7<ۣ:09s:nC;UZM>6thס]Ѱro?lk}d魱r~Lștd<fDDѕU /`ycΝlvo}'&ϜK"&H}<oODnTkYC*AK Сp տ˺YoEobOLI8f}81:EB pF DH590gHV͖.17މ}8?V$mF_!#CEJH88)̣`p4No_%hOc!" 
HhsI[P}*+2Ʊd5PZcA}2vrUL iY}|T=/蝥(`+L3XTLy측םJEMk|1ƀƶguQ&=LCƟ/T|4|` 0;{4ijb&dQ2885XTéOvsZv_hR>x3٣hmѰI+5X3ƁRXSl].Nc@ %SNɨ[)و@H]9dc1s-J[\ׅؓ.zb hۢgT`M,Ѥ]d}r6YI11+㈥EDfNP AYep:R~-#KTRÇO?VdΙjʴF >e($cjp8*pLwrK~R+i/>2%Vо[ٍԗwf]_fЬ:6?<Vz:S!6bo`aTUno^g*3J3=c `mgU‹}T)Y4'Ǵ e!GWFvӅzE_ǫ.x!}o8ir%WMS~Jo//oWf9I/yzz^ͱ~ků3_ӟ9{R^^p.u-n.7 w$K!a?q C'O8g0=he oo`N67V5yU>O(l0vN>I{*WFb(І#Jy#moN~N&yhG:עDNxoBvܡݯ TLvTkݜmUlWɥxa壬A3ftC-2qMY٩4,4*aZO& ^ َ" zyC%$|^O?NRrDE ~b#ŢH S:T4QyՃYoCFlUiHX"ɲ5(XE[.J90û?Z-lXRh ihyfgSy@ c La,X`@MU>1eOYw#[9|ù&%rOη$4'ZQd[h2FZ伓BT1*ӧ #\,@95tfcyLH)Y2iluM}qxV>r2GWG-]G|&al"1"փMw>Fǡ&cO?p_?{Wƭdo0@V@֭fv"ˎd;o!= %*N  `r?n8)RS뛨m lY8!R"[]n{V\v 9G(#,NrZ#qu;κ͍_$Flh`r) yaY!m׸^h);lPL*D:9)ÐS B-E_l%-j)ET"R(>y[C.]0tb꿳sS63:VS_N[RP6i,[Uת7zܶуʁU\ \(LLyz SYmG3 ̝ 'ѳͅGg$:V$8(͂(v[PKRgv's%=O}]ҎB=pp 6˧tбV}W[8Gԯxw8HA ~1pT7ѵ)6ߺozx N?y؛`ɱg%nƉipu;@axDtnvt|rzYWϟsѵ8e|hzNUiZvv>ZrYk'0&ޛŇush=MXLugzL^fWd@>{G>k0iV&{WVdhoݵy=Db~PTr[y&yL2Ƿ mQ/&l~␹&CՌ^iXQht<xDB%F²(v1dR`ЯF ۼR)?pDJJՑI%Gi gCcqSWگ.c5_/۫771MW.DtkY`t=y5.-(AȤ!wuGK =ysH3\cbC;KyA ZqPwNdͫyS*`,dk$;humߌjNsjс9v<[_qZ2'Pp5Vsj۠T3{0[y}?޿鋫|~r<g}~Y&> %4ߴ?:mnTr?!,:]:0s%ђɧP5rFe'ppQYhÂX*XK֨lbzGeN|y11̳1ԨFe[ `v'B1JV`;adX]vƥLZv/ڑOzNJa^W*}T\-rz*ԮfR0*_9 n-0q$*nQJ^8c('ωyXüBbtUj DOOniӀ339P%~6~(NgxƆ#J G-a^sglB!ճ/=A`=my35و|ZnCY2(g yAQ]evpev/<eHR3gHp"K䤗~^֝.uc||QyR-X&lڶM04xMIybD"}( AJ]dD!n.RbR%m4]?/5ˏE015cncx, ==f)eԾSǔUTU2DwIT۬>MOʡG9fp\rNv(%Uk~֠!ph?m7 -6c2d"iRLH9pJjW=I/t}Hwm1͗!QjϲfUV>BQ},&;fYmkR {l a=@ېڢksp\g00K=H=2b=QmOv?ݔEbuSڷ巿h35ѫkRu8Ũ0tX}DL].mAd+kЅ QsZU ,".Ց)c p"^ .Uߪ)Ulidv#t<Lj;5qF)]Ú8NP ٥510%(෯VV~B|jڤZJOr P50A$$:̫ }rńmoZ&m2vCd5rKVb^H lf52 PVS^ŭ>($"͵vV nAhB_Xn[uM3bP"HUUܡ}Lqp`t+`lfJ4TR*4lIxr\- ,f4-$ עާhb  'ۀlS$ !6̚:f2!n?s!UR߯sUL`ġMola{߻l6 9qdD I}LslZag4jM0h&21SǁF%r K0}q9v)1y󵽾)wor?^h/ 3ŊFme\#v0~uۦ.GgLշξtwxeY,R0-u2` 6N_$!է7e^^~x|4*u~ł3})*k4YO"-SOjt GBǵB D?XsOd#u%Zp" u'׏wo \}3zN hIdDR=4AM61O&n͑* Gٗ{I_Vu@ƾcqFgP)G۫ĸ^Xn`&6Dp &0r~TRMp,Lv.p a[~t-uǼnǨFD)+}Ύc.'A nw9#wqz>Æ#J#2Ȭ%FXyc[Ve8oQ^Dzt,A]KCW ]ѱ޹"PhɑgS qƢɈשv"ѩqݨۜq**Gmqݝόl8ߩCݑz;RނC>#l=F,G ~Tݏ |stOVn}̉M · \ɗ?ogy$j"T!#Z5t# 'z&MgV|NZZ[--V-=KIi[~NâHݟK-gwrv; G9{ɷ6mQWlQax zΐO.)Oy`6ě"== G41,6]U= lڼ ADxlR9v=Zr>(s\ 9N8i_n_Iu[ݾSMGZL)2֧LJHS0Bw{;TpƾpG\d ށEQ|v`msQZ ~v݁}s\ܮG1x[h|\n1u Oz$w+5zKsю#ό˷3<(Bϰ*&_ֳx hzh~d00KGݽޒ#^u'P^>N^g,?%wYܶMulD/,K{@ ߓ h<e7HL]M>}'9&엵]NFCW#=GBY{+T:}I|գ'rz`%r3EI=JߐkmʗaT)i'恅&;ks3qI՛ħBl8XcvOaΛUԟ\C9:9hevٍ` ^8+bn c9:yɏ* }ٝP EbZIavڍԤ݋vٻ7ndW}ػUdA`{@ 6޻=54v {43-%F#[ZınVD0E@G ݄yDVvGG'I6!믿>Ռsշ? jru*x}y>hOR8֧V5iBRtS V((zgŸnQ,hεP:͵\r3Zb\zL0ͮ.Nz\xϯ_/'MjݶOiYm+]^dиft17Vx[Fe!]:5_VYޞM8O} ӃGu9ʄ,X3KWlwN'%#I;7aԆ.G5{K_o]_is4NxWMsWy_^bil?;,ʙmg6bNn?7G~Xo/7|^f|)t6YLlO0O:h\YAʰ01 e1O/DKJnNӸjq@?tdAh|ЈZ y>M`[ DZTqLy,hH<1 PwfN'<<,X/ŏ[}iā/ԎAj)BKtz;4HPf4JgCvFk5.C [ }@j'Ԏs|#5RGp<!PFZځC1Q;7ftgmNKBP5Ї"8}͋溬 Q'~\N>aT7TSǔ@ǿ=&4Jpc. 
F7ZhPtʎsNHM# 󚞝a#MFݞP;V_97慪}q PPI:+CyeW BxHP–Hv-57LҶm6xL2qRicLviR*uG\إyG7%X&FӄƵ<'0ݵu]jȠNq.7UZ_Ep3}Gzy"`aSAdpY94}՝z?2|M/~sk&T3Ͳjt)$gI_yr17E) bEVZR{bNޝ[sK}ieYpfx ͩkfK JsF8Q,"ay99o+VP17SD,!um D]>yCU˩ցjuW =T`a%֋e%O~LnOJDEu8AQ`wv` M VtLw5\JWoބnS0$za1,m t4ځxR RZCd5ڀd @Ha8Ba zNY!yC : NR^Yx]x > 3Gn{Vtpÿ́lf_d_3ϝMI7_/]f9Z?[eQ0iSB@O66/ D]]^DSf?4=U.A5p9Af2@ gnx^Wy3m̺~Mj?~u35/әgL{<}^_id7o^ӜV?"9kN+m5>uAleVg*"ʘwbQ2ACuYU(u˼ȃ+U>Pb TȡTV#*[-8'[9I Mq=yYE ʛ&>wqAƼ#S2 k ,{kuQj}a=o08{Yܕ}ējO/CIѓ 6{Ֆ2T38Pe8>/CӤ B{{yۗfὒ^?uCOuȖNkRZ<*€bQْFH8E)ҁo4aDYԊ=&J*Ͱ0hO0Qa6$kAڈ-g!hJ3v+̓jwTb3D<)ׅyJ(P5Ї J}vK ,21it6hvB: JiCvF1QN.K QNvB@~KԎ%J}:Q,By"f oAcԸxs^+D<cֵA+"ڑ߼x+99/>h2H8Hѧm8"1;m0Ƃ[ yy_9*#ee8޻%g٪kK'֠]G ޡ]b7+zZ츧eMpdNyĬAVJ#-p\R޻에◮bw1P x?3dYb+f] x?"'v3~_DAl12sgv3~={fm4 342ŠFf/\[ .ȗﶆy{C+K6^BN),ו}mнsO\1bRЏH[ }LGG@K@(6P #u5i'^/ZRW,S[ֹa̴*lVLQ_[ޒp[.#;ˀ,M*K1NTа9L*xO{x}vu+>U7;-fY ǎs%0́{hRCL=ZhƦ`k`5:ɦs.Xgm}ƐVힰ[VNbdU s6x VԸv}.jg7HDG ]R& [ }`' lԎaP& YW8)ذY | ;xj1Q;#Fe5ua j4P=v$j7s`ǙW&xr&-nvN6GEX7ԅytM)NjNdڱ:s0O r k'ԮP;:c5pTtT}(r_ymuKQ\A楏,[OŻhwSװ㴭~c ciǵ'[Ch Ov $c_-՟`H,4!)i8O4Km@tjJ?b7MCN[:&P+cu!k C"^}?b&iЋ~^*E tz\,dҢv~cqcA_v*`q 0dva'0&i+#I<.jJi)ѲQcQ$z hZwgz<tʚoEٳR&G;9TP,h)0=Q}s*?'gB},s˳;C%$,,'E%o+KqkԖB]-mdu!SO'z)'S*.?NpoDET\$vpXP# [iRiB}.o}{>a]Q7FHF+;]K;Mq8q4ˎPjS9"iW+W<>" XGuy@;ك&mxuNen)WTeqvYU txI{#~ 67_ 񗱟 t8v$q>짐 *Ay4(ʼI Fx a?,>'I|F:L#?oWf~4HO!uګ43Y((0P̬,rp&M%T;4*u)_[Ȍ+?jB{ NF;eX*@ji"% (Qw HSlY l+!ɖOP~Ͱ {応MF!-a^/|U=$J賫{)b ?χh_~sW1qG)7̯^wW`4RT.ë~h Tw? Y߹ iDxw V f?QC-4B+O!M{֤PCcVofʴ4-Bƙ$/M횶IBd GP6k^H^Xy2Iឡ:0 (Cy6[ʖR~an X-pg0ѷ< r%{x-wҷ\.:C|K#vJ4A8`N$XM9C)JmL$ԙ$iu$   c J ͕0|sJX\.ʎb4 !}lMĨ c]Xh#=0UR+C-#y#;ph쀲vp-mR~;T=Kkx$tЭOx:59itjAi?{ǫ-QKWk~5>J[\u 8hPkh)&-4h}ph3r\e-~٬|~F^4/ 'WRjpďC_f6/K?H g!c,d,z~4N2S/GRL P oRVH&JO im)K{Z'_'+Ѩw}PUur3aii>IJs95RHpZE&,ay[e'{H G}'nIDꆄصR&K5}'W؅sz+F uG#-VcA(n*)F-KȟL; ۞dCmcW&6^sGj1OƻJ9+Y.ֶTԑku G*'ly\.[od" * - GVCU"XH+3#ʋ]Ec@_1oؙ镍 ߌ9nIwogWD]e.MN„U/= x$P1pb"'/W>,ηg9ohZw/t8?ަ<@Y/kܕIrO#"FE`C41ijP4/3@#M{r/Ћ&&  #8D҇u۵ܚbxAByh\ma*RdXVwmZp.ٷȾ5;G Bܕ,p hÙq_M'ff5jfV!](c1ZT"ieÈV3+fA_fNC*r)ƿErӾWI??By+K[N0N!D ~GLxsszt8(" α݈VGhbl V&T4O(̔RB]d޷Q5kfZ%̓#jt0TߊكPVZ>]#PFGfx,dȑEӇ6љ_"}䫥1.p廪#E{J `c5"عpo 嫖0背!osi 1ɀ}&\(+aWd`zxyqP'RomdY < g!2XVDµyڲ9ʕ)b,<4(ef+to]ԩ[ׇb8WӁ`_Pw_G3_̊VEoPev+>&3{%K時֨l k>{|=hDXTqe`xi@B-9; !_{I$ynȥ&D!2x6\R{-.!pmݝIhݝIuw:]]wNh 6$ƥئQ1 (FyIJҧ]A[+ۢ^E{f&ͣx)`|AwH I3Rg"$n#0X%"NgyQ@& Z FP^Z)K^:Q9M2veڕC|uz*SK?1 ywv=ɺd3 cӭTR،m/ۜaV Z=T邲hR_E,t`W0?7Z?DPf1hg Ԯg* JzxF$X|X5zxBW@lߌ8>ƪt%Cxfj[ l[8Gi@Q2E2 ;OJwCw"ў @~kY:_&'n&U`y5ս5n 7z3N߬T_`\0wK6nW kƀ `4 l _STw? YU\τ˴]xw ޸DpN?|wb\vXr䔊ܶ]ޢ>XHTn#!+ ,z?Yo*m%unm`U6+̓>";m'ǗO BT'Fe J Z$Kqܣ]ޅ 9߭'ӿ?/ӏD=w\sqRӋwƻ,ߺwy4F\ߗsg1,7a>׏aWsߡV{{;9\M?[2{ӻM7|$zg%Dv! Ey#cGIrf9R|[x IӨRٍktmTzq=|cj^:s46MΚ^8qe8ysQ(g47|z-IB8sc1aW,:>HIw$x(PﳝZ}C|GEy3$C,iv s&G$~Oz,qĠ5a gz ǜb?6EiTv$(αdwj 5fRϋi9ǎpNcM$vv1QjxW֠3NB`5V?@doLA6[0IZ֤VH+dsf.] Edh4J9d@CBv#RId@ ޷uv-7}]go٪\KR2feyT2+ RS ²9S2B-WYg!A|'n, !.Xz16پr-CB.Xmyؘ#O nrvAj}[r>& *srPMv7auXi~F5& 7I}C#7s߄h:$y4D9'T[\Lg"uv;XV9"2pd̡!{>E`l`}v5P9b;-NI "nR,fL}_N7Ҋm5R6.T! 
w{?'vgv/-J~ ՚e#[cUKo t=JKۼB|m~b1޶iE~*k]MzW/ $D繛Xb^+#Bfi_¬:H)z+HR/Px0XUF%7GPN uۑnT~!$i$C`=~i$vwU7B$őpI\_#2T'?mp|ߩB`>'Ai܁{{\J/ tieǠ;GIKNj&rEof}t#/MC sWK@GKsfIӱޔsVÚleQ89qP O=7Q{ToO)Kz ff,TBZ} =kd0A%jPyu;A=dgVTkH>{4wRd}d?IO@x٘/KCĈ9Fؖ[#(L%/q@;r^ރVV|:e=8PN*ߥs''Դ i|:S&sP)oW'"@b^0[S#@5ԃrVc5Ey6`mT3LNՈED{ԸTQ̪NjInp}:'l54t*U<ɓܥ`kH  ,\jvJ2Ζ8v|3a$TfC%SRvS0X*jXZdKQ]v}f^[vjjtj*;0A hUTfw b^Lf@uQ^ =C[z,2fvTZPr<4&Nif}ZVfW-ȧ:ekkAt[q]pTDW нdGTo/l#0PCK@T'㜑:#T^|Ӓj!mwX.9BglH.wJ `6.:aALvttwom_ǓN~.ɴQ'?mߣ`͛Q#?G3j }J˲3nc k"m{8>(>'NS :RunݰbaKnL LG/Lg(;M#})X;N>~dzK31|jHhd\{RhٽR,u=MlÏR,Fpg|)^ؿ|ǣK: C>|:^CTPcu -h왖̣ 3bJxON!wVZ2YIyІgY9wefvʷ=+D%,"zngmv㛭i픠]vR;鴕}ءնRJn>Sjb?Uez,lU5o>eӭK';=?rw׿}f^)(SD6Dhx&^ћI叽6)8ޣ bΨ:2Z0" zЭW:QN.V hPлx/,Y?_EzWʑ.ӽ^Z{eͪL!N[6}?}1jQ5-tr:ɼ:B>piAdfˢ)}j>G.z.j"o̿Q%3xɬ0%J7hٔH `JZ.BrYlRn T:#oy,JqPM$'<3|3Z|p !l(h ڠ ˤeUL#u}@/"΋Wv+.ӵ4=Q-1X4ސ6jQڋR(Y[ Ғ] k@4P[F8b!6JIޣ,Ё!ű V]9lEKcR zE ݻ5 W):vNK"rv}=FG ?ɉUzGh5ldlHᜪqE,rvո($B8A*767.Zqz}U-ݿ}IugskGN\I_VJ] sXZ 8+a[PIXsoJj-5!/Va&dtZkUO`Xvj4@dVn6K^̳J5s}%=R'rS1c5IM@ C ]"NDnd$^-n:ߤeJ›M㢴M5Tr=ue*7ARքTDӫvIMiԔvIMit% I`h%>2tY|f83|wGʓS@*6Dhи`ڒd2KiU\IK0EG*$X/m|.$ir k1FZ}!7K iX-ȕ' N",d*(%ժNVҶ6v' S5-st)5luu;!Jִm3QLJd~x eT$]DOAAJHfA(drhKax̞=OYH>%7K&-e)){d}L-`c-bYrQ,vq+*|Z^ICdw5Z_F V/6_HJ˚ޚkw|L9AE3k8y!M Ȝ[߈Nmİx?ql/^mj>ͣn4̓m3gH۲v];D!L>SMy[S)~m?eE1ZǦx c'L5ye3nuSۮnc&J#S݂4B[=P$U w6\sْz>O]Lk~tCCO[HƉmڝuGܽB֒D|a9<.$MلaK>4yTE[ Đ6` wK+'MتP2N5)L}hkN j4o[xvJP@qKH$ `*.Gq6#8 mUk= !c^˶7!!4d;3YRq&͈d{%}2 ).= Lby[iAt(x(+h06Sȡ5Ha 9ȉmЮoq7khhnA;9Tqh]T5!6 }LSw$[9Vlje@Y e[C|.Q0"bKS5n a$6SwH"CeC l2! ❡]QߴX}G;u?&Cm1x%͋Hlׯ&Ksn7NAb +2Gjĸ.s|  풆o<*&HECl)}rHYE;Rwxv2*A.AKb2M.&'<& b .h 6H` v6k"sQoyd.x0/^ C=G;ucHnLh5-i:VEYC 烏dgNݑ e[I ފ=کЎFqPּ,-݆1p v'goY.K\N xоpq"ĬsB'ūyȽ՞}J 'gZkZt47=ڀ^x:6 Le\ ;RS?uO9?j9X3n@{u4z,utLukY$5r q&gr5ɋI!.P`~`r~8䨼Ϩ}F;kkgI~\^/1ebƛܟU .uջ'~ gZ/h]tHȦ`+Z {!<YV7f;!FE+|VV(8!ݎ-rPz>7ێY{9ew JW'Gq5}^TهKU7Ӧ/0Tiu$RR##/9WèOUwqVʷ`Ld&y3-}9_DCA;uG`[]VIBRI;CMNQ;꺉DŽvQ[[C ҧfr \Ljh7Xa 2P' YCA>=mhhn?X;!f 6HJv;v(9ݲ̉ZXR%v{v(L8Y;GH%ln$ݎ7[VdNЮomݾ%hTO$P<CvON$~֒~ ߫۷c MLELޖ,`jH HȒd;tj̚d'vD7hZַacjTf۟M7Oy- Ē<Áaaݎ]lqyE| ~-'> ;ڱDŽvZ3q4o8f,E߮ e*al퀃wC;Ci@Ӿv$4&#mqyR[I_^rElAYn;Ys{[]bdLeY ׆}#B{;tvvEƄvA[`Qs(b} 6H8ѷChЎhh,sQTe$/zKlhW7,;کjAx]@"Dg9 6HKb vSsx)n;$odgdW7G#=';uc0&KbL"X"䄶kdA`B;E+djK2ډ()Юoc2s vy/E͋q1 }Q-ݎlnU"J..sI_ClpOV/wQS݇i7]EVӺjkh\aLxk}8U 1;;+`Sfiӓ_~ ݊v}[mc|sa hOorK}Ncr,MO6{KIԑ_e҇ՉrS795p0fVnڢ89`'k>=o>Qhvq48o[m[?̮ڶ;EHq!;2-_)$CxVvoIZM)ISp;sy$ c`Ԧ^?۽u7 </PUz4U'ç|Ⱦ;j2Ub]Q1URKL!rXQJuB%~#,8l{n~{츝mu3Kip|հ~aTI\ :*cGw%5?ɂöxLJO0(˘f8('зj[<%e{ rAkХKeqD$_)W'ڑ=! mP>3x ȤA|c-Nu-9go13#M˿Ku}Y;]l}c  c#n:o{t\p-1OiݥЧ6" yg=,X* yD,Zho=[͍ǃi}b뚗EPDͮ]ĈbWd*ɜ*c!7ڕUwӘN?yy{Otnq>t+})ʟ>ԗC3ץ{At0;kUA@yK_<~~wMD5~}}qtr]O78|0ٻk<| B$Y&O޲(E_kѬlAߴk`uu7wYPhec߇W eZLŻ:I.FV. E NE:MRßP fS\)5 MDy+P)bekL=HΠ=<H`+JR>eA91dٯ,L+[ȧ&,LEucDSpLJXp0o)}QDvϊv~K Av2*ѻX!mCXcͅ T&l]Ò荲Md_sfX|}|ҕooʷcU⠏~Gh{'h]B+iBSӖO^^o|sڝ^Scz8_!23@} A$7F>L V6E$%[S\-RjIIvhزH69UuP^ɉX]coYQ!βw~?;ӋB֫8#嬳I,+eٷuHwq+.T|j\ f- WP^^v3;JdeEB6±OT[b܆J*Ӷ~pUtLaS %WRxE‘R׿ޛo~y {e]П!x+vmsP}u8M %; V.s#8ݞSGK%a)rT M*x#.#Rwb|n%}=x:V! 
QGNqU Z:hr_Ay},!upX#vD&|)oO/kfw38bU7ROYzI*g+RƇ><$ ʷ~{k$?\9 !vbuuxt!&1݅ 9Ιyp&#JȻol` ,5} 1Wy-{ەp귝~.0Ivx{} F6L9]{㎟X\2x \= ܿ5._Osz;8Z.%qgC4vЙNӳzJ^|oY K*_D6,>q(-*b ovS:E{Ʒ aV6v6i;Ξ]'7$wԪRib3BdIȤ7+;~-3Z줪/Tiik6uM wƄfMs1)%"8}{5$ %%JKݶ;́%*'}_dEd|cpS؟\2:qp!6t.Pslĸ_ۃu%AQLg(Ҕx|~2eq +ZΛa.ׅ82[0,.MgJ(  )[p6y hC^Q"HdtAJtyc'AABf0+O2;?+ u=ݫ&?}Xܧ #hE0|/-z?~kQ$LaѺT91*gkv3 R[N3#u g2=cn@O4T |F UOc{Cn_ ݋?d)CWʏ6j=R4JAQ_$5(ɯB+, s^ae>߽xot=0P{/Hj.9T >*d7]0 %(6Unuui0/G7iH f?_YZDJ` #@;hb JH2(.ei|Y/HrWo^I2<V%„\# ZO;Í뺒 y_[; pskbOPҒ}8ڒFەGiƣd6T(&C'RAnl#6jƣMᾝsr9?^H VGՈ  BtNp;Xrejۯ݁|`&W> QMV-e/4@39 `Sm% ;\-!Rc>4jӆ433̨4צimi*\Ms/ECÔ䆄05rD¾ qڝ>zȳϟ XqfQ˜ i-M]._ޖr;ml\nWҽY_Fo\} Xctiqx#1 ` 24p>4~8ϙBtLqsSsAL)Ht=p4 *׊sAI;M1JɔD*t91A|mB;r{61?Oh{yezBsU;x$\PF Ҳ@gĘ0h!Z*PIhmJ;>v.C;|]/ovݮWnWVR1ɇmE53` [3i0/0 bEADEӪ%x~6 M*$Pbp7@B9Tp@R# \2h;N 3Q:)SY6ٴ 5 xFԄc WY$*fp5jAvXa:52p6:A0!3.QCѦO>iGfߧIp3g({lIF**)z h$'t@#;hI_I`7 ]l;BԷ( ޚʲ';W,?]t_{ᦦ bC%QP"fG IC&Sa#i|xmmT^wxs&h|AyBt% c^R^ՔiiiiijDfW {k.[/a&FJz[ۊano[ ǬA#@: B_={(PĬZO3cˑyk{GeGOhA9񇤹#vD1!YR!X@3h"il&ڙ,J´FDNQ fDŽ΂iЫ-W*dYKqo:zn9^%os>{=[\7S숕)45]X9 !% b'< J(ĀALdG糈%+Z/C`\.bNqN%7H.LT$fk[̷ %Phrb 4l̂A= *>N]\َ\%K!1T%;Wo»PEknF9 4F1AInqDA afy_S#<̰!^ tɦ2{ɱ HkDCr/j,`=^=՗&{HZs &W΀#R!d^}ŲG&! 1x"JeHlO%~ y$A☻Si7P5LkI R9 (P h8cb5A 9HP7ư!=e>_؄weknP#>Kk%_@3SH8Ki/dbU%`nyY7[oVEƃf3D\ ;NlQG+]|T=:x1D2HRs Lrky`G?H-,RkuL2sXcR@ycnM5a'XB>vQgvnKfV:zgvks@ܜ'"7rm[@Ez8s(قVVPz#L/1R !{,P.B8w> !wq16@]AF{HLVB K5{8n,JC_vXo=bI;%0 VaHdXݒdIh">mHde@+e0s9F޾"YXɀKb }zl.!|Z-! @[T9JZ] 2Z-E:La el]g c*E&\K.~ͧ.uF9VD4e]ˊ4A,p}ZԞqYG`&h=CzЃS5C"`0"Yu\X9t,RD&ZKM'nh68H^;O:NT*+]C)TDVc#XБM'OBy-`ZtPRȁ } 3sJ1scvU[xN_"M;kFT :Ň9CZzYڥ=e]==_é&?}a߷?|W7'dyo'N[,].v3 =@2~{)o~{ 7L>Mi?sn3r?sB逝-F3e,߸!N$$cds;0N$QZ+yh S v4'IVh0Qx8iGK  \H4?S݄bjrd!?:/E;Φ;`R ?*X[rle"F&yn9VrAY, =ù"6QA[|&,S,qv131DCkeWA5j )U4At+3l'nҀ8ĀxRV$}$lHAHcDc ); )@#8R.=LsԀ5CFĀz"K>b/h-,_&m)˼4c;%XAf VL?n>tΫK٫lATd\UedMjFq0圌jz72/ k(hݛk_:2dVI۱el>H5kRxÚQd-z&ߚ0xICHվT-,i"S"=L9yXI<n?gԜ/i=<͂G<Jui-xrX{ީݵ/'Y ޝ_hxdΑ¥53mS57B ZL(ʻJ2z1ȼ ;4. iP7 PXj `ߔgp#Fasf!JA*560k57͕̖aCZ솀} s"]&-h ' OabI!, C8Q{j I&us 1:4ŞRjvS9&L\KcjRcTj[{{e\[cj[1}O/>sM2fw~[ k8.¥X59GIwӁ+qÂEVWP1pB[/!*AK.,1g2]/ez^z<$VLxb۴E9nY{g Ͷ5rh.^ G^o{q)\wja\ja\--m,.gF*c4JʿЮ}-+!S1붡+& cv0n@ĄT (쉆 1* Ό͜h4o36sa '4PQlYR.eڲhھw2Bu/v xO |o7_=a[3kv{2=kl2K{ #YcT$P0f荦Q -qbw۾wNFݻe+v'vg 6Z<2mm.¡f]%a>ϥchRl"eпnn6=Ln 70d6s Q&%lׁ\!JƭQ9+UWIS %Ѿ!MCd0F,x{{{ۈ=)]KKV_QsgAF>rwUsdLfqI;)M4pϷu8$#*5|OSp8\$t$fWGtNdVK-4f;M*t\ $# kG=nމI2/'9O:jw?YO+G xO5(m=nGKAsީb;z@ xOXU:8A TC1RSXzw{ >6YL`C 6cӞ BeB}4 =U 1c co&yC($=O` .5fJ.CbN]i\/hƁ"[EE]YW B3>I  ,0*mDdRN,CS˞a^MdҊZZ}VXnԦ57䀠|DvXAHl51rw1/ Wk f.+vFvjw[X^ UI'l }F֒_H&HA*&@E%UϕjʿrJd䤹Kh8Đ.}LH xՋ;"2"#s&{-*'@F=VRmU]v k qkeLЩ8'7qO52T4;SۇBrTVOn4g`LҲ2e 8s0 e?a}9 s`rhA!u2=XmY9 sx6 ԺM@E?2{JЃO|@UK%'ZE(o*;ea ďَi4fM^}\3MpL\cW^PSn!? {{AY_^|JSSc\(r!M]0:6U0jyJ1 g0w[UK)R5ϳT}L! 
Ms&󬹨i M37xM*cF_ssX+nhNb0K(i|Qn43K[fA߿tFv]Oskx}u1KH2gԜ%W/gWU(v0s|-Zs=dY3#=4^276&ie?#j͏o޽6^/gYo^^Kz?ἰyޭFVhUg=󱞦I /4 'E(j{k9z o⾹v84Ԅ_ԛ2ۚ@tXՁNZS D#L4ٮ^Y+)ІS캙Qf~@}{ 8~fѩPpjԇdeÊD8#v$`0P%7KiO8O/2aTI_{Ϙ `;&4h!+7 Fq/ZGwQ8WV*kΌ(*&qCmC]+3BiUXIce fQO8%3Up^ôQm6fx'I&։O &W 9G">q"rf=NZc$;n>IdĴ[TQ<ԩN|57[KuL~9Fc!H`25[I%P$*&*$igfFQ+f©2fKL.ϫϧ}Y7/nS?i7@ 2u7V0VR2d;6zM/uors@ʸW_$_"a]]}\j^ߤ|ȱ$/6, 6/*gۚUh-Op;M>vY澙zˍlUH[&|)}|]wTO~pUmuu$oR@zCqiR NU?/dN S`uR򯳿, Ug‹ n.n2 =4ܼ?^Grq7@%ʴnoVFpպӋ˳T:f\trDvo/_ 6-Q#gZ'%(~mƖWwM<1"\,0ޒ캛%Ao@:[knH J?$q:%qNʶť2 ([ŃK].Œdp1;LwL7dGtJaVQEm"юYo QI̢ZE9,t@X-p:TpmLII$isdԟei+G:(@eưNS/0)UZj1"GsSXϨ$&O9S}fkiBX˱}$nRuRO̥z.s7$+«J/!v:?b nu^hqc,ĥi*)O&8H%9O̡2'Jd>.^YD=Xi#W^G ׹`y F U`x/ F`6/^*7>S2;],zl e%| ” ˧ϮݨFK ʁJIpXApXpxzxriOPֱO潜ĕȐdSWc(h5c)H)jhud-Um32lY o_->wwD!ղWlZl \_tpUr5p9àFh',%)V #ڲ&|.M tASKhU5H앸\y 2Խ8z&p}^|%w>hrYiE$U*rka߃'Q`2=RMn#KFW3T`F bh#kg+kU͖. ^5KQK*-i߈S[xFD,q)61s#Gf#C8G,( ZL$BD{na%B%TQbedtkKa.f\G]IIXm[X(-)TBONό:Mq)Mi(A4pkWZ>.$_G(zFQesVˑ[qf*"_ǼR=yny<$$em됄}%Ktgt&JzbaU9h:GZaV5RvI2./e} ZA#T'BVM"|qWjijL>G!\>y4Jo*4[|&hE%v՛6ֲ{w3dtjSR%%Bac&! FWRRAnF-KO ۙu~t=?R=N^_(~z ], ݆ex*>Vca:_u:pz(ACԖI'67:\;2bjcy>M}4}Jڴls캆~]A!IhUb/5(3GkHo'=nYjLj)1vjc:I!6nԡ}oJശʩcSXɉP44OuNHTQA7M5fyeP^\zjQ~ԣnnf URF >hdNtj($]4R6"RP2М-<+) O{/ Qh3XD"#G~NOS3½z6vRX3~!WTJܤ39㜣2J(8"6zD[)%z%'m`{vauo*2Rbdvb8$hVevc>z&!oV׀SsTp4X,mP0F/?1e7F8-(kEX2ߟnsaau0޾&0vJxo~d4՟7f_W\@1e7˿~ ąM_m p_GEJ1Ʊ7 sv_Zm?Q*>ςlF\}o9|`-0|uE\⿃wS𻼳u}7 zCp1!8rRK9X0W)cuQGk

㴞 H2jc֓rx ܛ-WO$B8AVZ,5 k4RzM(Ez.RK1w~(wpp,!ks϶mc c@mO=Bg79gfVD GU¦ŧҍ㟹zg}OUhK0}{7P/v;Mi8_=yZE)㠤 3< yyE9n(nc2¿ 99樂Ov$~w4 n{-kĵ z'w_=߽MUo3'hKyVdv?}v-!nPUOteʂyueu?՚TkJkc)7\:)yeE䕭]&c%ꒉژLZ֤:n Ζ")T`FϢ\+{,Dnk=0mUE),!V;B&8JJD Ok,{p@qK)eMF{ҁBF#J >&E&YeeqL j_Djm=Y1"(Ma4 4 "H0WN9|!1AGCm sYVDH"E AElGP +Ib[ej)q˓K!Zq#TԕS,SeB Y` s!8DR˹N' Kx3g/jGXx4Z*Hb3'rXF֠,IBv'[6-X=ۺjEJxc X)VbF Ce 1(֝!n! 1MΕ{Z<T?j}Y۹b?N,3ͦª\@a7-^AdcgG.cVcyj(=1y0)t*^zP*@7rH-3Xt{yڼ"R֒{_2%U A4|DXEs/cr+ᄌÊʺ09Η!Yʓ}5f0zهien%c葽^z+ULꚅtAOg }e8Y0cCia|9ΈDcpH1Mg {"!r$Z$$X"V!'`Nʠ'P>cӫURc[[Fy^3n 5cqHAET< #{HC\DQkXƻˁ>z)%HiW*3͑)ѫI0 w̱U/8@WYLg1e ~!iNK‹V)"TC& hR٠"q<`a#5$kQT(\B`3Ss,ؐ\&`RiVJǠE=cBqie: CLp=Z"b~ q6T̛E"wWfk7$S ܿ5KE2M'>!y3۫E0^Ev  ÛL.Sy>(({n: ͧ'h.[#otl?# ^TS*/}3G6'`pioן~nEt-h FڙFm?X-l+yph/ϥJ "i qʀq+klr:Q:|n(F4RCdq3୉V?dXeӛt2KQa(] "3iXd6"`YfzρT SFJl3yd(jvkԲt>,ݣ'Y'f1V2;p#mMe7tnw_[:ξ4#/bͻ;'d['xZ>d![/ E\X|}pˎ` ~/ TVlŐpB%i!y+UYJA-q E.yTWG~|0 ~JJ|{>~$ҏYh8GR A/d$VLia1ϰ`)w~ϸ󆧝 a603~nƮ9Ǽw$Ht!`ϜΑBkc;k6usnrkRtX|:+G/A>l "(T~d2  0&8 Gek1LFۄ~ 1_rX/ԷuW[P}HH;9DG(m 9IŵM]Mo?^>4BiNSX}:G:9h!ZpB8D 1_p*G 8\&puq :!GU,j0-i5ꭒ X%DP[%-X%NDݝUBbUR*U@(ͨ@;gH*@~c{u.]=eB챐Rs!>g[p&| IM3+ /xf@(=Ka2*/RjikJ%Jڒh"l?[;|ʓm9d ,Mq^ƒ< y5Bpblc"fHp^}n|J)#" %q Jb @k5~,$E? &8Fl.9)-7j,oMidy;HڋqFŮ?X )QsŶOضIoi& zNςJ̳.goJň4}QX\wgpGL^B (FDh Dz/D^'"(޷ Bg^ R^xL(jKĀH>Ӥz ` @/!Q2$Q.ۧ^v\ #R"8*7Q"}m:P1<HOshLFJGHpA@h-0&JjDZ1221IR>֑Jņu$p,#hEQ@.O|I91(T1 \l7+AI\(]ة,Lda/4h/}"BHX H@(v&FH"D&%̝n"@1DJj\.:cuH*_:wQ5" S"k%͛G*V2OyrtFf4!6?H#~c;9,7t{w~[KA0Iׁ PvyYt&C4D]2{ub_-3oW9uZ?$ %d G]Z}W5f$'ϟFʹ̮֠]MBpx&tLfi)Xdh%<d]4C:>jxLN`Ypx d - _i , adE:[3j,oMjJ<Y_,YIºڂN>? ûBֲK}iAU]䐱Yj8y٩>Sm=UlE]aZi.8H*i-6L#.5LJ^n1HVO.rjޥ D`HYz#_y=)6{CeER1{1a $T} CqGpЊ V?)/ӊl8doնp1m.l8G"4͉/;UYg&ݶ=s9_ Rr8:-'^zuY)l6;opLEY6ZIkn5ȆfɶoVbqaA`UˌAlqE]AAUӊ 1UCQ8$LxgV,6d:[e LTG<`Ug<!v#?]#6f2Id#fOq>7gnn #/`51<Y᎐Y[e0./\ԁ^R>uɀNc28{8:3`t0`S4"쏥ܱP]d/Xd=`h=>ۯ6YaB:FRFiFг%-xh34DMvngœGe`ytOVl!b/uǼ͖ۡpGc_:s~ՁPxZn'يd`Q:=2ċR7$ZkEtP]'f9OTN j p\)B[n,\ В2gZ;ӛ=brՁ@q0^:-*,32\87BIu "LNGWpG4):J8Ӛ,TMGR"5jS <8SvaJ6V.DA}Z]}c0P4 _E ~cҕh=`/V7W*&3:D2"=DZާ][+L^ꈜ# g;jDn%C๨BI7\1JK9zc!)/ l?ɀL)x@k)\wABV}aa\_Rʢ_'qC#,khR.zӱV8h2ZtjiVPOҬ37·yξ V6D L[LႭklw:šD;aӖepz+Wo1ұ'vKZvUs2x\u`Rۅ4/R̈́)bl(kvVyî&|~k^1JoxK~~D!oϒN7U__=CPY˴g '^ bbhߤ ߯Sxnވ*F4ݹ)V(_sUXK"ZV`1wH.on.mϏN@(l-ʕ\Zx4 $D]Ato*Dkm` Ԣ%%On,2φݜfkRmPPGlKZjJ!{qvۄmy }Nbh"`;Zr!A$(0K 2*uYcH8LB@`wN˚> uf`5'یOm) I$N$})0w־hO&Sp\(5nJ^)>[=u1oh857W,\\>K}Ҹn/77[t FNqIh]-ǓPƹx 鳗LYIH,TIGOJr/UO4U'K9@/Hk~cinrGcZYE+LK4A4>1dC8Q Sty\<2Vc9(-7j-dA]-qd,݇"Zxv7s (y~+FD"ңȔ hr$~VGvMSQ3iKp~=jQN,6`Jt{|)^a@s|b>?sb~.m֙EG$!?:Njl6\/|ЕyyK. R+|gfNW:'^po1 \ɸ?nzt63RjWd8ܜo'*0jM系F/2>×RvP+l~&O[ gL0GZP~ށbHSEZ S)Ip0$BThж VϢ(r'#5 ~cbc8&PČ8 22&[ATHhHˆ;O ":z?v 铌RW(Tj&`&Bs4t8Z+T_PpCus*M>Vp(»n`jY]~@ Ty1&kj\0Y2IN1Q"YujIZC[@Ғ{jaiy%  G4g~e]HA6:|zۃ†荙"s:3V;[ c )W'&Z{:TSS2T1IOVȠ_\w}ܹaI{/Hj֑z\rd=4UU5Dќ=7 !VD빥býr/e8Kw!Zs@I+J礪^R l992NM˘3Ec/2eLמ<D,g $q亇޾Np(.0]A9iU7A U5ɭ?WG7<,Er06J'0@> uNx,}Pdv˹ۜb -:a$ӯw. bHvFP\XA.̲:t 'T y"czgtK2#)p&m7=: `o{I {Z"ʖQVesRbԵδu_%T??;wȮL΄=PH<KE乴`H8 kBB%P))L.XK"#/6LwA j`vj0@r`@`p 9P;zbص :MZ3@Hwݱu.(exHE!F!0`T:|}я\a{1#v2CyUA8[[qbD6wVWjQ]"{O{N/dqd ]S)^dg/H?^d뼾p{;wf_7ILUPӟ&b]U|{q8gf ն"e"#@Ȱz^I تOVeRovuhCVL) }.o\ ~{% O>@Rqju)yɟxUy|k"=| Iq?;'7[%Pr1yޜl8qlxbu"a-W3-{}֋r !fuvU}D6Nҧ;PP9F3?ilhάtƑd2*׻]|*dXe޲a}K'!i[eҿW 2)u\uC[YZ*Eߒ'KLra g{`+eY=1@as H}hzhֺcڜ,͉fr9/jN_c,YoINl2qS@qNM=&,!b pׇ:5+~a$R4Gj뵖.h*7+"9fKU,j\i85,`z/P EwAĔNqy9.pR V3ƬS9~􂅑1 w'CDTb!9H ! 
@!`$nS\jW+z]JIsօnJ]C̠ç%T|:Rޭ5#|{~:V: @7T%|íq95QjZ'[iNס2^?caNWi<~RMxzڑ9#¡#w๋@bPw1m޳nlTVxPȖ?(or(i(2Y:Fm(o @YqQZJ1gC& L+$?RȩhT_qEkq6Ac/ߏAq &!k~IHj׈vmor {IMzݞkeQ3p8q\Oًg/>x\bx.SljFum< 8jib@m'@pȁ $xJxqy<%{jJ/S>) &u'M_z䙆0(tF )0*SV*1)T aZԈ( %H bGqJ Jk/V_!%Tbw^jhW)ɴPB+Hbv *̭b5DT4DQ yIJN@]zRD"هۉ_ďl>P3`_,v^ 1`+᧿4Rb*i̋^G= %1,#;{P"=h#wm'ZFvۥ%u+yn?瓛EOS~Q:?GPvvCX,!7B?V֔@Q dٹ\ji%LsR!%>v IsQ[x_A0fsaF?xz7}?]6~poqMڄoaϝc_GߎFoߏf-Heo_!hԍٞ lW5`˯3ucVŪpW78J+"0]v.CXO~]Gc uTّۻO_5d&ZjUi4bM0Jo1ˈDY85vvy hi1xe3pለ{{zW񠝒sa^Y'܏6[m.j F2˓'CP+(N,`8@XDvɠ7IW7A{bGve${1gKKgqYVT!M?xPF)k?v2n>'#}(FoXgFjq5_D0D 19,TIIIA@0n➗p0QG L!oq/$ Il%\I!x' Hqxf⻵W+\N|'7bMJR${(B0l7 c*Z#N`2T-bAR̮s4̳]dI>wX{S&L)JћM?L7>frgbgEd9  Np=N#;`<>SZ$qK$s0]A%Q'zzCK;2ZB$*+BH0_PQ mtHX*I&XXQWo3o18 *I!lK:e]9^Kz\s]1׮*pM{n0=j A=` DٱC4J$8v)5qF0J"6xA1 S9-OX#D6;s/Dz@UhbJ0#%X}J=,$UχX.``l,2\DU ]),FJ 1 M,PL WHزCS~f$ P]Ux3RIR_,MӚ+\C.1P$J*pK.>'eU@|;`'F*NN Bcĕb.L붐b@F upYBQ[E[cdAy?`z ;.:.T'F2^j/K]޴6Ys$eX^V{`N۱юRwה\oUKzY' {7+1 ]~ޮjL8烷‰.Abe!om2P:q]1Zwoee5N_ߐ 5=AX]XPTAyJwǫɂ}KK]jMD=u ȚP8I"I`|_g/`-&q6W:boH`|/w29lʟr^3@R6خ !O?1yAcz(aQ >DPoo|F1>Cd%kۿ{ gAV糲)fw.\I^z]|]v *w. !/̻ ,t?nP^ z |Uc#9l3@>47F;O07b6(­G/?Fo#*$Ũ]F\E ^EED= ]y6m묧63e/C __n޾V³BX7Iq7c3>2yJ @d9ͺ ]^M\VesW_E{ͼeY|&dSLxC!cR!csF!#ᢐ?몙m/DBPx1,-"0in g\DD϶) 생@}} U3!kBr@9๜F6 HM]W^ ]rD'.68\NaT q9.\#tgu9!k `0Z@ x*s6yCC(Q4rIA*/3.6?ُ~ `[`/U%xW m[%S`Gr$R ^5R'Ԋ"AHE^Gr%,x_]l!*ȷѽ2Yu;r d+ @Q. zX :4ZRGL aS%dy[ t2VfU]ݪӢ\ 5 $φeV%/S믓9EuO=xS.z(S`ljb( TNkz)3&UbKIT2TF?;ie܀n{W*u^@7ѢUrV-Of`p?t~ȣYޚ7C;y^&x:yۭۋ #G]>xdK&xz=$׋x^}"ڶ\Mb;9G|Q@z[WN25u7fn|2 'O ]v؉i*8u 2)]8jhd,vM4x6v|X_˟١ξ#X ` fZYg:}{RܙNxVrNwK i4կ(${ k,K˯c;f6fCLp{tpAuw_>^ۗZ#0Dzse7|1޾ 6^&S0Ϳ'ߌ e4׻l- 6X:zPrqI~RD jQ}Q$X+säFEv^FncH &j=FE8ߨHZ]r% ~"Q'ꆕz<ʲ8Oӱyz=M َWMxl]ue lR"6A.bzDW,7=Ԥ/3I8(djًvʎA\`2)@JbvCMw2{쟼"ƺ",u}3ļݷd^eX`5+lN29 2mG3g4n 1k!ՍC1*i\Q.jY:KTefsU2`J*dwz\r'^\%CkUvɊ4 ;.)$BKHy}[Sr8m|ׯPxm/Jk9KѩS|hԩZ%,_GunDY/I("}K+5^ #Ćj]T^s*?X[kE9q&U'E^` @ `H>/> sSAJ1Uz3(` D&*fΊҔ"vvWtL)c&X9@OJeH@+#Ŷ;o36g։C |326.DH(; }l(˥%W&q=|30R /e K*a/RO,|\ R2ao iBc r)>VPHT0s#}o%euo?f`pMp*if c dj@(c c*_t1v=Eqှ?{ƭmO8gl7)6AmpHڒ+y=#YRyOtT= ɵ$77GP Ӛ:O:-3j @2qUؙP8 N(&JCzC + G+8Tk 5( c)xcMPX8ױR >%C\8/EMP Zj?IAt3+.W0!eYM6&XK0KΫU9Rq 7X\g`|j`eF{7qj7sM&i^ɓ b(rng} Hb`tsxM|bl,TɒvÍhl>"i~$KWwӯv!=mq5OFhj:BM %6_vnrͶ.Vl7GeqҘv}ӄzj& ITp.i}hvǖo=^8yB V`p7Zk$X0r-"XEʷc^I(rj3p,*PVBCyeru%qaT :#2Zi];zϽVPsa  JeM[#/۝ŀަg ' s"%ӷI񔘡C goe-Ang! =u rƺt*5Nu2HKeEª!S I-m2d5ӵ3u2-)$ ݍ܆NXOFn(Kl`}QouN헛JGKexܪ>sXKC{ӚiVN:(w+;[TdwvOM/^2 lK2-c[xxA3-V}%0.tnF#wfTүCvWJA].kc\H.=y޺ssc`.Fҕa{C`A:g`-Ψ2ݲ}&q$s{#2;ZJCw"qs-?:w&P{Ϟ l A%h|B1` r{χNb*y9CgNPIqֈJ⻏]ޓV?cT/1::BY zzFEhfdP-;>tt|b?׳B)ڽ!FF:FV݇Pڡn/Ëz>>R9=u-Cَ#vGǓV]TE 824" ',',Mze8 ~lcd۹zc7־J5ꜽBWF ^OqNd&)1R{L3ѱ;6E]Sn N;EĞlSHnd[Z%l[ywY$u}MRw mO*s`+vwf٪N5LiO> i}֤F{aӛ|m!G/gd0cx6 xλ?\]F^ {E <+DβH`uϬ&z!bH9=ө o ^e uyibOk{ J>̴sϖWgOPOJ?Gjݧrv]>B:& D$)LxqX9J'5,Ț[y~0fޯ[ݴph5G>RL-UEhxTL⸾zggnfż߫\6˃3Yl~|Z, u?n5|ӷ9ͮ_7,0ތfȍ@~("֎5:?4]+s ݵM"6 臫HŒwn˖gz}M<}Flddͺeo\&OK<W nq*lNnYLkfcfW=z//mQ2r.*Dg>ޣ?b;|W˸))aS(\I5kDMv\clVmyp‘jLeRýVE0$*0hKUqENrl-@%N@E>z;0?\pN7JK^e O'Kulyb?Y̌VL}H3]JrIs_=qK"bƥ^xF]8+y f{ > !"5 og`DfjEyI/{'W|ZXľ3]dh[Ǝ1?8B+ Yw/7C'*ѳl(%zdG,@<-{zM慁Lg3X,/YY@{3/2bUVy9R"ܴkcE-9cJt6OomסT{gPgW(q.eGPxÚj2H&VJ0D 6)-l<o}7M>sgŅLU!I*PA qڧePNYL`(ٲ/~^RTvɴR- dTx:D}X<$#蔓ZTZЃX &Hc:Xs0LlP.P) gJ6ln(͹lg~^pUsM Г]!ݐz|&Aℙ'8> qϮ:4hN , x&}8x q"E`K:=i-Yw̃GgưB|Jfo#R p1,4IZX%5-**q^L^=Plð^)d>+2D)'T77H0"֤5ﮮ2CZ0q9-Phە֍E%?>֌w.XKS1th`UaVX0J:`w`P )R S s͔ nt9I !ͭRv)Ԑw c,>&&ԊSEP>MSQ'ɞ:Cm -Թ3ަC%&Jd?4JH١<.nnonaƳ8/9E˻wv/fc|_ZšJNNSy$NE֔n^CDt E'%}w7řԲ'WNp NC[l ԼԜ TI0k05•Zcy<{i5ܫ.d=dR'pwwzG&o ݚ<۩?oU;[&jMN1ЀZݧ@Ԯx N+ g}b7tэ\X_&7, OPMnW,ny^NjHqy7  9 ~ȯ1{IƼW6x]h҈ cݯ4b.i*^f-84! 
>zkofWݓE-`"KڴխKb~K, s>a2ٮuYddO9ZW;(Y:䦜ЦĊOQ񴩺nJJِM+%*dlLt~c.W.u曳f*7{OC\2ǏD7rOu&r%:,>C5Sp'k4R&;fCT Z֠5ښGаVT4<@LPuץ!l.Ug񯅓mcԬfOAtI C 1v{ש|qqO;,Js#:˓ 'zصNF, 83 ],Kx~ AhmՎ ]=m£0n1eע s1*ʺcag+.ʊCeK*DV\J[QZV#{]sow3Rmj?zWQk ZDrB.Kwa=TtoD3,ݹ㳻=0R2Z}7eΝQṶ݊vYn@ﻓ1,Rft/#%qjJdw,/zŎujם3c 1wUڪ2 Hut*8yp:k,*ʇL j*NyCiR(ه$jݪnT"VRn{(&aΚS]!8QU=wڀ &c} Am]*5 *Nށ ZeԿ/p*ݓ=PV3^wyƌwy^yPpqW 51xBE 5)"%NRuja(vzm꫷ZbQHdYG-w+ٚPk"|OtKbb5 º㢆K9  !M)֖PT:+,{Ri %Q |ÇxƤ;j!iJV(b9$bBgwq#E=Ai-H*6㲖O+rkߓՈ0ngT0XֆRR|"a9zʥ$T<5 Va:8͐"5rRkFE3wkߎل.QVEaݣ! CP̿kA(G0h`k@Dʑ(ɆzS a.Ha/O~aB-yuJ;PQzrw?P Q8\Mߵddž'LܕA4)?jSp{,)kb2= SXHHTx=<9_Ab0h(FHpI9L: i y 8޵AԀk!V1ID-I}V3*smW^E螉'{p~|ػ6dW Fŀ9X`76qP‹ERH 9i(tWuϗ޻*6FFҒ@0F (U/SE$@,'fB0AB[%Ioo6YŪw&+H }y1kYiiÙ.AMd1JͰ, '0g.c:KέOŏn|;Mc?Lc6>.oWf: cfza^|ћk(T/!{.#Qv8v[0Fh,0W(%c< qsf-yG9Iby͝Ҝ*&XV 5B)eHՒx*eK!E^bd 됀gF{x{AtG2reI3rمu3r- !)FQ}MP/ryP:c4nAl-кڭ 9D0Uqnh7+n<1hCF 7dZW#h»iCϚKwɑ`d҂AFm ;jy ,#YFaV >)QF5W]gxH3D7B-$c82@ GȻYWhryPGn'V vmOFִm !)k=Z ZD<1hyI.nلnm !)!zqnj7*I/[&jA'sFQx-ڭ9D0Ewh\ԁNM!zلnm !3oX]`7/Hpz3&gy=ή?N-S 8Qm=CBy«9ħ[6Z&޵H" ^ʩv[jZ붉w9·W]X׻[y6qMC\xWJK)*"%(04c,4{nV+Tn~Ԇop,@|>\{d-pzw0D(Qq,/f&bfo1J)X/~ĸEO(1&1l‚ v3Ō2=|1 }g#UjL2 r懷o3}%64VAzyqe&rtm-Bœ"xw7sn|N<ע,GZM WE,oc@?x_ 2rrU* P#iYUO@PwjxXL^9_պ^yth1rׁ';D\K;L!W!'AE  Ul+vHԹ.]?]Z,]!RІIt}f'*P|J"=Z!Bn0w$T PN2":2 j7 m:t*##2JIt0+';_ IE׼Eܧ=jn[Qs`&/<$r<(/JksLزXuoȾx&wcXw9I|G0ӟx;㦮i`Vgªb$|L'ᛅ[i A8n]JB)aVTX|3>%YUq578>OSm%"?wHAs= X7E|:Y)^Gɻ?[&f5UͦcE,[ |:Kkzy8eH 5| Mz62?M_f nC(ӢPBG4VWBQ&P(4p 9gzMp,s_P*";JSJ0΅_~O z6M%z2BEqagqgJ3mJ㤼IZ֮N*O2kݲ2LVBdS8׃ICK?ӎN]eΟ4RDR0f'5I g1ԁ$ߘLKV82`4muc}(?_lk4(]iKD#ͯ]@0vQ7HHHr7M M7#S')#GHبÎ p0x<0WRџ w׃u؃ >wTzl ]\-Fqm^r^|],fMpʡv"2hZnG7BPS{pc&.Hf"n!Ls'"X 5LRE-5u=)( ko!;tКh$rcuBj/ H{$*5'-_3Œj@pԍj߾Z{CT*'qٸ~©m| N+nk[ e1몾NPL`X͉3D||1[EЕOlH]^W &,^*օJ腒L,'VYuxt]QmA^v%82lLgc&o(>'E6ϣ~?4/hϣ9)`/Sh7%͸zxuK{lj)&h( 3Ƿ`ZCZk_bMy14-LZÇLy֙nPUO -cLQGi$Gxy<.T\/b^\W[Te.,ҐlͶBB-Ur`EG(qkY<ר廉Քf(]ܸdsNjEG:m.;˟m Itʣ Fwe}NIl^5 i;b++tJu!/DIЎpmgxNei9@ jٶ0NF91*DWz4焳3[n}͜<Ԙb |F}bCZEG %PO3BiL2Pxccy5Q̄WJڞ%L i1J E) _xlgLJ0BJZ$|iWųmj,qzfd5)鵜_+edd9J;f+4`;:Q ωfhͮPqP84# *n+sJ@dT,-c銳\g{u9C-=NyEЕpw[}"Tw[uj e2cTa Ώ\뾝=f[!t!D%卵lu#(JeeG ^~0c:akvMgh^$h:'h^TY ̈ N (Gg\jN,vX&,0 ωFYLXeu|g43xYVeyŠ%^(s>=L{"bQHR"w(pXc-F,u0?~`f9 $=?nw#<G4A2"Dg$q3m2%3ŨY9 ʑrEqY^_rzN!xoaݔYz[6`를g4Op;#[[L>(.-Gg7C0bhry>wqQ?| {XN&4A*ݗWsϨTp)U}Grq󙊤3PS,sPC=Bb Z6@~A)/U-z;\`3lӭy7Q cZa$ZkSwg22O8cXr) O6zè!N#JcFōu;b΢hECK1aךPj_F7=R'1QJxlH1@0wH_將HS&ns$ܗ,lFIc_[ZR~H8 ,fޯq G2`naM,QF&9oG_Q&ύL bY_!FJ>\<t:ϝA^gFg""͝AO[iLˇşJ<8v聾.${%L@Psb"*{jcqewx>[ISu>>6QS{vY|wms$S ME.";5]'bMcܾ/{sφBHCd_yt}x+oz$Z^ya32Eiɴ$ɔy:YT5F9 )G32u& (wpAޏIF|x"|`(c5*i'h& 45ԁ(\qD?8"Ue)kG @LX *;nk$8hfL13*\<+qP uZKۺwjJRPwD3"_E("bp_|qsЗط6Jto'?(LSBL++SwHP!3=Z(}|D%J1jWy0;aXjz# ?ov>Qf&>> 29 g!PѝWC lHh(Su-Fx!yH5hfjuHJ$GErTga$C-Y3[A9?3Աe!0݁(q7N g& hBs0|ԒSAL)%X7ۼ䄇Sk{eQFЎ8ʍYZʵǠ0%nZߋ9l݃>Zj (;A>v^L]7IfbxK (#%OɃ]}r<$1ts˧7iqpj.v^rqU+{&8x(/h4S"3u]NWsJFx2LOwvsX?c-j02jI=}dKbAw<Ǫzll^f@{Q* )2v}|]}j@0ԙ`Bi7<7Sϝv,Vd7US]X-J2/nb=3/E@w hst>Q x SMt}> 6%^i.a ޼Z<T^O^3=N& lhA4gvҕ s;鐸^@SIRrߜ"VP2n&yȥ)fEB*e9ɜ u(&m}s+81b{.wã#x63,!r^gA3`֬ U׶mZ,x5m@mXbM_|nF@)^ހ,sgsHz-R֖Q@P1 uw&o /]h"tF ڨe\eΠ]i[NˌDYpDbk1XƭH #7R<@p"u|& H"_F"={=Å(bZ2_j0MDž }<`&%9E>EdQ2Nh.-O u8ws N"UI.f-Tl=H;6I7Ut>*-UqrNWu9ӣs~㨑 ꫑ύ#ǡ9Uj\j  !OA`i Hbxa纬w6]V1.lG4/ڥv ͜ha%ݪ9lg֟jƄf'|Oapzjrx5,*^oM8[lcMwVh=Iȟ\DD.>nKn]1Hhh}ѕk[tDևELU_yLL6^Ѵ[DևE@XLjңeh}U&DqrlLJbs nĻQ(ߔ7\o}@'kI[:SfI1md;jtL*?FоQDeJv0Tvb)\BCIW-` 5VfWL&,V|ơ2d);݇$9ϘufedZ_:vz/ߴ!z1m^11 )ƀ}.7[4) %xCYa[9}ՋfDd-Mηv&[|{oO?Dٳzn)!8/fwuӽSz7%UXԈgbq\IIO <ӲXɔ 0N!8}ne{ݸ oTNS~l4qBC=^aq ]9M*?RXuro|9 `_f׺Ϊo׻׋*zu_ /fD 5ؑ>ߕ(BԸH}R3ɽf__o1]nֲ ӝ?j6n(vǜa>v$ů"1[2T(ͯ0h1ν/oo=|jw wὛ[8A ?w""!NA餕NLbEAwdz}hc # 
^|b}A׮.mxfp2=:Ɖ}!ë8 o>G{탆Y:F-rhˇ n$o8?{?8^l@U%N;MG ;_8e l-/lذ׷빼;d2$MnfՃӼ/nSx7/2.D>W'lyO mexw`7 /OVF#Z+3RlOZ.2C?bLs>@{'GBd4 =le`plQ8 l=_'H Շ0=El>=Uç0Flǣ;s)29b)tl23MϊLbEas0ݏ2)60ڞOu%H5.BbnNu3;sr֋FТWNyeg{'NzhՈ,|w^6_fYM{g2< ȭ#e ܺ"ar&kl<;HD(2nO`RL[ s2殰c)=~`ZݧKWsMp w7BXv?V:Mn>Gpƒ׫KU 8fҭ` FhzQ1>jt(7Anő(K10:·q;FJU *W IElsȆӅ_>GM Yj$@cE Kpo1t[SQC ď ǍN#HKyщFQJÐ8pD )7)r@<åω^997t{,=F,Qe[s,9y#&7{+%/JdU9M s0LiĺȖ@B4POa(?+8ZC]*#{ND be⥤Y*׊Y$mV(TJmN(Փ VIkV@RF/} O gVH\3+,*Z<3+ZBe.!lTK+a.>݇WǪ^]˱ZlѽЏ8z+be.O%C {' ] п/ϣ>9m'{Lqbo5}kѭ(3f8N8cJn Hw Ё ArZ1  x?mM{=^? 5gIHT\ ~;OB\K~%f9PucqtEHU1ǩ(zx<> x-X;΂d-ֹ7C4M un9!"aI3.N硋a!8I튎mPP"u곋s+\)>>:Bzp&-Vh"MOxn--Y!,p0=s2:+BWu31_ 8N5cgo&+xS7 _|i0aV~S&ś2JK) p\hɭ2B`d@Fz H%A*$Qdz4?=- 9PnfE=IlCo'uчń!^tj ߖ6'O(f&7.M5Fۻ nu{3_X0Zz+w拙,yk-tNzDr9eѩNID l~}on$Z)`pZ0ßQnfi5Ŷz^l sRj!ĩ =DT26 Z2k!"xeyUefGygLW?:??M64iZoW~R Ĉj|*0ԔGFNh;%x)1`Sǐ2„M#4^Wu, !-0&T0iRZ-!XyY1XHgL(^(YЂ%/SnVXUbddCsqIIl‰5_1%WRUОFsW16U Ơ8h^mM@QpR +`i`,8 3!Ra&%Y*!>HЀjdS s"A^ru);Y9Cc UΛtbRSI8FI6P@(0NM$e\`)R2IBo [(AL !\Ġdb heZ b2 !H%AIceq&`H@5g*+mP{탖9p*ZU5SNbHǍ%kF (t1 8H|EA@Oi (T36r-rBh@9~I~ _s;w=8X*?}dq-w@Mv5w=bWs~?V.o(F0{#G@c"Ǯ 'rrE#VB,+y]u:[!*isBLXHu:!7Em 茒2{Q? R>*$EK{I;Y7$0q.7K R}CeeG 5\P 1"b$/7Ę\oX|!&F B Ƃx\Q5(!N=#FX 9 Ȕ&D*X&O]o.7}C~!`P\o(.7 ']avKbzM[wlM۵1 {_^Ⲓ@VjElZQM,7SE-*p&u\hop+]i+TY@q{ * ȎEJ'QRfdQ,4/y.HqrXk^ L`šX?8=l^k(i;Q]kq]ZiG ]?kk'::k (!1{^юmw5͜C ?lcJw2[P]nz{u*0gh 1~[`Z$ܿ8Л1^Cdf ?;]Ttл`FpOiVJ)~3٭Th!^ilՉUB;7nKOw;_T0c0ۡV}Tvxi4 5DqTd|]=iNO !~(*ɱfBr4kK껳ΪΖϮ3y5kxG Ɍl u#2=*w6yUq'@JNne^V/I6NIfhl9ĨNRgx"=FFx&gDӭ/;AJ>+tqLລzr>1(Cs) r%i <Lã&7 ڊzM6BŮ=oy󷀳b:*? /{ [chp3`=ZfR1OMTvt14pP邗'TRZAzţr=0n'' _-`k+=a8)Ӷih~ YDҶ3kTQ`h\IWue;>^$m[4Xblg]u>vg3T$::1ow^Q0gwADϧ)#/5@@2ނ^:4woX _~ӊ1d{Y|Fd^~I37y9|0[p(1DSGT q,4"Steks2У\ءZ%~ 0F"$&u_eE fP~zljwX!`IQЄI,9FnFn˹njY2x :h.:2Dopt/:۹µ=v1MFw@5UJ7%RFFT)C% A s(ae=P5ٳ7Ƨ~kHCbz'kf8ϣc斅 Q=wd"~>}\2+LꏣqIk*u~bb}i?wFøgQcX! wnxIG0`Ļ(#"vkDz~Nd452xJ(wY?Т(*sp]g"L*4t F78,Uo~ :׋ ʷYt@,E l2=y`c"W{H gЉL 5a22.sѓSTȶ<}>fhHèt㣻,5 bﬕed0P߫e3إJ6K2A00LL l"yTqq"xqVL{~SJsuwToLvM#eHP:.M7`PrOT,(O2~^&Ly󲬟;I)C ,D@e2s} uw3v+ΛtЧ8pǦ^umAʁ[]m~@ dME~VYU2ߺ(>|VG^PKCQUBceN+s^f(՞QǼ>Te@F ϝpF֓mjW]nKՆsUxEyhi*)b,']+Eũ܅`cLm ? oN dk9Ddn½\>7Qi>ߏ_lXWt ]Mp:mu W5Pw_bsr2*d&yץxFƒ ұ]5>"wCݏ5M4b)<'M*oKwN}t5]m]WP'G+Fb&Lj޳=Xjt`c4Tj @zPFVxF-S~&:EHnecTET`2 XLY˃s/lus9䫍8Huc'xjҒ3g_iïfubp E1;5to~RB?$v̦u#߭5+HҀ/o͢SuW >/*w&_ʒ6HbO8@?C}j"BɁCMb9(IљQ tY: *Q'1_L Vq9#ٖgܦ|[Q,yUq 2xJ:ݟ1ś13'DT  xbr FWE# \t_4)j;FɦuaUe9 Z<$U:*cMط6]jdb$'K8Q#yꍆƫoLZ^i[abB`<Y+uDY/SyZmK'SŢ^4zzVQT16P1Oõ:ŋUqx94m$ZY2\x0L+f|Ɗ|=o2`:Jnph*#s*<_ *uwsV-!.2,ejLW5⬧(*;S=Nm'4/5F8.@_5yaL5qZD͸W^ND3rN$L:9DlrRMNDk9( b8mN6'k)P Yi-n進~;K-Ѹt .Tc*5GYJ0N]ǜMݡ9ʝf (Q(VIz^1);Ku(bdb9(.\jY =i#mpc0>h^3X C&_x .޼}sTT _`ze=@Ƈ="E$^ɗZ m3ڋfl+=J &ÇђI 2fn`]Iv.5Ra mW&nOF7&\gٚ7zzoӮAi|_MdTﵷg UNqӿ5X'?wAթ](|hVFwkBCrSdM(%ZGNjm^s {"iiUlZo(jס|C='6ش-k quU -xs"J jF ϑdޓdWf4&i$}`nuȖFG1)۴|"Eɉ$)z{Q \]KoOŻh+p)2j_p%7~Zye_ÛƗ.Nk[$HZ &V}AIȭ[9v܇N0;$N2g{`8(?X:Z#AO"y<4sPg8h1#3bv ;‚9LJyF60_ivy-0/!1*JZ8]IZQ~,4j%)G)D%Jⱡ1)ejYEB1@%A 2cTUUaV!0KƐ҂W R5NGm1ǖ,JB-`"bB'r&2oo"R%W=br dS A=p1FQ>vx¸Ԙ+aH'nIar]9Z_bZ2lǠFε®~5m.LTwiVOMPHBNvhQև-QH0L G®P+m,jɶ54t @PqfNv}.,Ckca?zGop@af7)C70^}S~D3κ~jhksۑ hIjNC;RPbeŮtŁ0{BQ(17'}5;b$lY~&+쭖=cXZSٚZ,sd56,ۙvHl%naVj;-m{5̽lY%+3Cbmg_7_,Fc(7m"dRʹ+#K0"GṧeVqG#"}*:ndNzH4[A`kUD2@Da.bA)i*GV#F4_]بP5-R6}d:5UT!E=@#!S;e#+q0ŠIW 1P@l7X<n2oZ9JmGg>_b=cs_w!w'1Jլo,y\+Q~7yF{ Z^~H_^~21ALW_:?#.bˋNL;~w&aa9}7H/L1"$ K&_G{ *z3Z ^кu8:s)u>;yS N 9kBZ:#ЛZ %2EŖT 9˵8):k^`=jmzΒ=BQrwK9*&|1_,|X p!q(2dLIq_ZQ'Sdµ7K`i0E./8)qR EeXM.QC\9.rABU1afKy1.d%H;v;ij>0'gF,3; A? 
f'=tȒ2/{_W-`'[xy齄_RnKk&tp%kkED5 U!%tM)fuflːM:}ŔU}xTPE=ϴ:IqNa3> XN惽dM.7aI}ĀAh:S!PỊ6",(b\t4H)n1s%`wfݝf#`$G{;377F^=Rtk7V6WdI9~o烡dp>>mtrd{͵$%{5zVAPLjtىIA) -LR`wW<Ȓvgo*/Fqnl;y>|νL?&t tb#܅3|ra/? &!%(M_N^ُ`e8]~N^tl0F, p'8լB9qL/_ 4q9Rv<}/nunzpy^-'m~4iH Jzd46%v68pK#7Nz`u{d~`%Ln.S* x/_3 ~nCv7=2y5W _##<zǃw]s5M#X|a ,۽gn0ccΟ{%`.?Oa2go iWc{",mwf3\)z~`Sg9ݿ+{?&82eWêhiZ 5`A (ipf!7g0׿O`V^)_ -phYfҜ$Gׇ7 xI`yKBvQ 봇90:͜>TI2ViksLgUV>7ԦE J$|+ȊgAYD  VwvUW˽t+Gb2e]. gd$EQ(o}9Z ̈(ϭ`K5֮އi%Y_ F2+DZk?[Hr $_{j&fhh妩EGR?-hո%`1CȗC?:>8h k kS53),1 G cTQjPEE -@ofmQ5(eCAw6ol-X>udB4@"F\iFE"(+xIt18bu*^Rͳ8UmUP4+vաԞ j]p>P W$ĉ4S/" Ji+]jƐPwrmuVkehHeGXa`[M-ȊuԊ 4),ߖh*(Fϛk6 RJR}NAF1ۙ(L[Mg[3%rxlo*QL#;ߖR5+󍽆$Z2%DžP[]cTnf77/iݚo\DkԪ֭5mF 7hҠu:xXk֭6e[2%x˶E6b[~͵e(raJWZOX[8x %{e7ʸ=oVIp)neB2Q/f>&sŃ Á=X }/%W+Ҳ{W<9Qw WꋮNql잞FX(Yp. \7PYĂ;8Lwm_Cw5e7wn6NgMIoN (%Ud=,Zbee>8")8OS~6\kї5&Xv~7kQW!?9gyy[yΓWafY6 *}ю(lփkQGED$A yY(dJFtȒ1J}'ߨNP zzab^2D@=.NcۙtQ ܃ x~ ظiDqbA0tM7qB 6|'H] xҨ(-ʕw]!p1C\- &=޾OUZ|P1{IA$QzV4k+4b>"1$AH^N:mj5'9T} ??BIi.PIb"aFDc+Ocy$J+.֍9 DCQ˺n$.9R?\RX_Ƽ ڱR0M 0+SR NJxIަzA'˼: hr{t7ױD{V1JZT!\cMt" A F;Z ȵ&!Í=eoA>ז2ȝbRblt3hyϫn+!ijn&p +i~:_:0-~yh35KR),Yx!K5j _@5U˧]8: Qv΢,ܻ=K"w OZKxXf7k^D&x"lq]Ex!\hG <Ʃ: $Ȉw`k^Ħ$QB >XcAP+u.S2/Uw]1J'Ư-ǔrGvO-0̆lrXQVJh6~Dž旍.apbf{+qfX& 5NBG5.b5$3^zcUpuQ' ͘_ akvERrPxԫԗ%ZwW* I]oQ'!RduKשʿw܍]թ Q->ĕF_-yeoݿƏ`j'Q0$0`L=jntR,Kϙ+'Cj5,-((qBu _'_ָ%v66U nzQޝܖw<@ kmxѳb2j+R{]l9Z줲TC y&7P ~(|趍/ʶ`7H 3v!'EÓ[+ڕtB6BDki#-X Ƙ``%Ҋ]E1⫣v^(-J"f)`'?ub `ܼ_|AO0{'/K@qF{y9P<;C3ܱw;{oOxY˭!Tb)d E[C, G0c潅A/]|/Ώ~y}?OftB)ܙI1>=|pd>N^&+6笍a7~ap>7Ĭ IKj!]ӕ~714#ͧ +RwW2G:칿g=iOv?Oqxz {:^xxѳ܅gpϕu_~y>xLvvbp6yJvMK: ҹ`?_:uz4|ӗkĨσ?#3 "7e7.|(B_(\^?gr6s?e_'^zh:wU|w_5r[q]ϚyVOϯ#MkYώ! E#n^g7̯hG|5ü8%0fW}5^D)Z4vswūi)P- 4؛YNWc`^k[y8'w0SJArXwfbnry6P+ί/co`(>5g6ɂ+Ͻ]{e c(*A EZSr*aB&4vd6eBii2p?eҺZh%A(˸O,\s@V}zoϰ'@G+Bf~3eG=瑋uf$QծnfWF"fy9"nEoy;z[^>[:vhQsYe)30t01,K$q=ŀou3h 7q $/} ىi*Ѹ5%M#4!riD .~}zrbE +~{KSl@_Ot@,R0T0|Ǹ. %="`jMj% uhYnozxT* [TE2ś\ck(A,MHKH4 |Ƒjt J,g3JjfdSO'܆UHP*w6&E3Yاc J0b'8gG„:0eIf&1Ʀ ZYp8Yˀr^楅@CQRQa}x_tA)nyG+lÖdCZmPjJ\kaSH#i(Q$`hrʑCPR.{p1 Tk$ CUHbS)^QJs:Ŋ(YJmYΊ}c,KQ7Lf! U |@As.e\M3:S OKAT`F0QZ"w>"i-吔=*GcHQjcGx]aʌe`bZ8 A$ UevZA21u!"_@)L(Fvcp*%3ͥG^ðE'DP fX187 2~r#DhN\=F=]>b"gj,Mu xᤘL1'NLgtb`noT+$悯~9{Ubn`B0T^oN##*&Jw;9Hֆrbka#PdfMm$`5пeiv/oؤJ6%fxm)JcB+ؔMɞMŜ"D`S؈i!6̲EXUJpI^66Ă] aց_5,7)&)m>G$ags, H7SG:Q# TA1bUP8W@M)Bʊy md PLa<\\-ɐ v]1<<S* Ѐ SpNf01WU,.T0M% ȆVV%o7 Q"xL1gQ׭6)1o &GfP0pp݄I%UcM]!Y coMm Ϯ럒ӽ:%OP!Ľ1ɃcHP{= `F'Ģ;4L\P0l!f ڀT;%֍m(BiC=!*4 *Yo|c6&"ʼlKD[ {#NUرضJioMp({ o:BaB6]Z"HMsI)pv )DRZ].2Y*T4.a٘_QisʔAh5+-Ԇ +Š@I% ^d {cŞBx*$SRnFmlw˶gjsٶl\m.6m˶e¾5˶,m.6 ;OmXŷӭ@Di!T)K>|$gW8WP{^ %w'!%xs+p?=ioɱЗP}w x vMASkIGTP"aS"utu%0ws·=9{,Zp>\ B( /?srېPH ,.W)y\Qt ϗ]qzrǤ>eX#D Eȁ,yږ\pӈ7Ӣehlz~&"7YH4(65w:EQXƖZ2lz9G&GJN"h0YB@a3)N2 8>0JQ=FxFD-xeibi B$"7*h(TV5nKn"8w)pn Ҳ@i3j $FnvO~='CHfo=*B؞=\ ! 
'`REaHѶ;;`2[{?ʠ+l2S;:$1w 0%D4Ym/0$1mR#ƪqƤdAPAV1B;1xEUv$Cʱ 6ҙX(F ,>0jaemiUwQ1{` YDa]z5h\٧|% #g-l(ޮx밂eyBAumZ[L?(_2)#u2J*eRJaR2 ca2YW?{ p-]kkuz$3C=z)^z M/mpkm-ʾFCq6Ax(BHzd:ZP2^]yqz{1!HӧS-T`WibIϝ\hR 8-|V﫹`(}DW`}#wp# StO]H#嗽>`xm=fڈhWS<cj y7*OޅHxwǰ g<8@$4z ި6Ocoq 77tCgi1wZ"?ЫUv3dQP]]ұBV /Dr(m /0x'a.@ ;UQ((3$фP⑍~!V+}{P_cqiITlac@JN3e40thPsb6GhkCz|>5 ȷJo|5:W"R}ĝ6["ƄS U~ʕdfdi@g _mjb{@"bYt&`:xh]tuM~==Z|f V~uOoj/nO(J~zd:?+!vv :,՜wƇv +sR߷@FW- s%luo(DE!:*D`*:N U8N4**oA9\:93v-'wX!x#(mňDX!)O/&/,y&k~%_-j+Fʤ #c4`D2UT)S )Eg62UT)S0)#tA(UBZ5UgkԎC- USjVMժZ5UjTUڷjQ+jTuqT83tA 0KLred 1CäoVtw$Y%"LKku6qqmt폶H/e9mxM;Ooӫl=ﮅ^:ӯO .J,PHEܝm'ӫ&̻y#˛w3/st @gv98>.]ya)DX#Liee9@I xt`t<)s<Y94zPjsp*mDx}z+z -0Kl~,Jv>vB";q D% fJI&<'.(rn WPvjNhU0+l,A^mʨdJY q {4΂*CQR<8JP&FA0aTʨ@m^X+8Em (a%f٠qBPĘ2Ql3(ep(Su9VOGb9Veq>Ǫåsmd 2+!s!GA7!HWVO[C ^u* Ac(`sXsӏ6 jCciêpùTPEv<ʩh2r-GCAF'sJVX; ^B흲1t/oaB>svM>iV)B;46$صq xK(!v#PBd/G4glK`Rpy4gb \pE饙Tn[ Q5ET4yn͆g9"ҞסdaŽJ.$apDoK\=%JW*>y*T 27|? C4PBj. (0bͨVnf3U33>s7 z2 E?6.Ng1!fi;sT]mN{~ )+`L6,H*Q*˲ƨYcJ5VjX5FPTXS-DQpP\3pH" Q ʆHQG(Tjh2d$rHB]W_Nf%tZ=]sQ찧b?xk:e:\߿{{jwOf۷~9:.o^1TChW{~ ^lN{vwP6afS wA4(_S'v~e=$< 4~{uke֯d?+qEcs]4gДG!Hxёy UA4I^$%IJ#AGl"qe̔i ujUCbizI*eRfL\(S]UT)S@)ScƭrRǭ:ɚ:nuOVMa 1F4vg9%9`2܎orZӮ!9y[3hP oޙ=-S N%&fvzM \7J)yC m( 1nN>2sF Fn;Ik3ȞFh(TR9!7Fۿd-y+!oUլfU59@4(T.#q1X@jJ*\}0pb2He#,Q I贴^s׊3AIG^ʎRNC͇Osrېҡa=k"gс>ްضsJ3蟢;#HQv.''*7id<_ RJ[.xR6e(5m.z{ޣQd+8뵬މrgILOu.,N ¥2iybdJlD(Y4";7 ,Vos m@6CwNEqsb&srpIQ5)rFp\z A!`(q B(quDKHJIGPp7Lޭ"Ęu EȰ8bԁa$cLHӡ7WW+~X5FJ m\ {0bJwɭ/!Rݕ܋(?h @mѥ żruC˝%e#`RmӸ5^ 1-sPYJ*߫SΤgEc|vB1"'!IFܧ #0\+]anyV\rEh,OϒG%P3A@]#턈,1q{*dU9}D h  ]\=b)0XK僠+%nZe˩VM`K]jڥviPy˦\U(Me:"upj[C#Ƚj[bA-x 4ouWm_r/mWmkB+dThoF{qV$*zǞbOC/`i1&I޻m  ~Gqa8` Rc X?JKPm?gih|Eb$ X}i-`ԻZMhM# jІy&4l2͖ TSO0O7" ti^yiWZC@4fWBiTR 6xuuWͥFKQʬfml3~5犯ʐz> ?iY0}2qQ#߬ F!='Vn=R# P&{KLo!. ӌ)gXɬ+3W2,XdM( Tٸ~hc698e>>?Y~hq{;1YPaׁhxa8(>p"gnԀ'#F1<"2tLES&RWs]<; 2[xNH`TQrvȷl ݢU5Oʹ6(GPub gg*Jb^돂? piMNAy̜Ė'NJĜ:JVk˱Jf8׌"O123@&ׂd&Y[OFk:Ѡ'bࣤ P.ܷ@FL9{̀ƓopvP$*FP2XY8&R$$1JX9ZD Kj¥(\TZ.@׳*![ tQb9!AtR!=VKXGOY\ 0RLqpDQɹcLb̙v6Q'pX~ "I_(*eؚy>U<3,ȕqV뻖MP\6o;ȲM5@dPƥkBR5 fv88:Ѣ>l\~fmNuT`eg{F8{[~}v-`dIŶ)wc5s-G:֧Ӭt}K4kCe]KG:^ȹJQ bgr>OC*k?:L;0١YLz5Cê12n*&JM1qKMЊȡ}*y*YHA-R^j;rexo&',aT)pxUB˩ c)h̐Jt =Uk]4F`Ȳ,nELnv:ZESwp_ta|r3ܭ;(7Gbe%]8(TcT\AɗіR_KM[DoUXjZ"rxUtwUj!Q1U1t8DN"5x]\O?mXt: fD^s֊"~Z`V!`P\ՂE@7!cd1Mfӓ7Vq\ #Lq{"Ô)zQa|PLb ¨B*IJW&+ܕI$Lr2a0xQH$1 F#e Hs♖-~}imTw.P-7˖g/bsG<`G:A"#:`P5?>gm{uEu>CnXVZ8r„{1fZ}Qc`dX7VM*2TVe:[e:[-CI5-CI_,%[vLz R//7‘G!/ϻw/mOMG߬Gþ zRĥDyҵdr @_!vy`q ^?lp65<03:\X gԏ0]}@~aLN9F~x8/=N{/< sI8#j0G/se O3eW1Gv]7 (} HK5?ʶ,xUEe"kC= C돞_dnd~S0SQS?_gyj2σgoX@X0cOO43sA_? 
192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152034 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152107 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152202 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.464319 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.464549 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465480 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465603 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465671 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.466123 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.469213 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.471968 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.613458 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.613993 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.614416 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.614467 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615136 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615204 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615223 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.615821 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.618875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.747882 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.014141 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017165 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa" exitCode=255
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa"}
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017436 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017535 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018414 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:32 crc kubenswrapper[5103]: E0130 00:10:32.018766 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018800 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:32 crc kubenswrapper[5103]: E0130 00:10:32.019367 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.019924 5103 scope.go:117] "RemoveContainer" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa"
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.752970 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.021251 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023157 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b"}
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023292 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023483 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023939 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024305 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024428 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.024587 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.025123 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.384540 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.747450 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.028980 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.029746 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032029 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" exitCode=255
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032145 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b"}
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032303 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032318 5103 scope.go:117] "RemoveContainer" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033701 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033742 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033781 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:34 crc kubenswrapper[5103]: E0130 00:10:34.034157 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.034474 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b"
Jan 30 00:10:34 crc kubenswrapper[5103]: E0130 00:10:34.034659 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\""
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.747267 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.038453 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.041264 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.042448 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.042494 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.042511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.043003 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.043369 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.043596 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.749090 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.920149 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b4983a7ea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,LastTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.927164 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.934357 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.942464 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.950420 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b4a294443d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.936505405 +0000 UTC m=+0.808003457,LastTimestamp:2026-01-30 00:10:10.936505405 +0000 UTC m=+0.808003457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.956434 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.970181731 +0000 UTC m=+0.841679823,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.963558 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.970219492 +0000 UTC m=+0.841717584,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.971504 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.970238933 +0000 UTC m=+0.841737025,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.978506 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.972207241 +0000 UTC m=+0.843705333,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.985538 5103 event.go:359] "Server rejected 
event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.972222282 +0000 UTC m=+0.843720364,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.992691 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.972243102 +0000 UTC m=+0.843741184,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.997972 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.972258553 +0000 UTC m=+0.843756645,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.009609 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.972263333 +0000 UTC m=+0.843761415,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.011261 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.972278423 +0000 UTC m=+0.843776515,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.017141 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.974378705 +0000 UTC m=+0.845876797,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.022781 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.974400906 +0000 UTC m=+0.845898998,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.029750 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC 
m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.974418466 +0000 UTC m=+0.845916558,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.037094 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.97457215 +0000 UTC m=+0.846070242,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.044705 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.974596371 +0000 UTC m=+0.846094463,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.051029 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.974613531 +0000 UTC m=+0.846111613,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.056323 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.976803996 +0000 UTC m=+0.848302068,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.062655 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.976829846 +0000 UTC m=+0.848327908,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.068734 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.976846157 +0000 UTC m=+0.848344229,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.073858 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.977113343 +0000 UTC m=+0.848611435,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.079215 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.977177895 +0000 UTC m=+0.848675987,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.089406 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4bd7928ec openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.387713772 +0000 UTC m=+1.259211844,LastTimestamp:2026-01-30 00:10:11.387713772 +0000 UTC m=+1.259211844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.094941 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4be8972e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.405558505 +0000 UTC m=+1.277056577,LastTimestamp:2026-01-30 00:10:11.405558505 +0000 UTC m=+1.277056577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.100193 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4bee071d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.411259856 +0000 UTC m=+1.282757928,LastTimestamp:2026-01-30 00:10:11.411259856 +0000 UTC m=+1.282757928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.106851 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4bf954ddf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.423112671 +0000 UTC m=+1.294610733,LastTimestamp:2026-01-30 00:10:11.423112671 +0000 UTC m=+1.294610733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.114860 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4bf9b8029 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.423518761 +0000 UTC m=+1.295016843,LastTimestamp:2026-01-30 00:10:11.423518761 +0000 UTC m=+1.295016843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.121339 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e8373f24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104814372 +0000 UTC m=+1.976312424,LastTimestamp:2026-01-30 00:10:12.104814372 +0000 UTC m=+1.976312424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.127093 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4e838fa32 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104927794 +0000 UTC m=+1.976425856,LastTimestamp:2026-01-30 00:10:12.104927794 +0000 UTC m=+1.976425856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.132969 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4e83976bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104959675 +0000 UTC m=+1.976457727,LastTimestamp:2026-01-30 00:10:12.104959675 +0000 UTC m=+1.976457727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.138832 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4e843a7c8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.105627592 +0000 UTC m=+1.977125644,LastTimestamp:2026-01-30 00:10:12.105627592 +0000 UTC 
m=+1.977125644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.148531 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4e8a2fad9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.111874777 +0000 UTC m=+1.983372829,LastTimestamp:2026-01-30 00:10:12.111874777 +0000 UTC m=+1.983372829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.154780 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e9083142 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.118507842 +0000 UTC m=+1.990005894,LastTimestamp:2026-01-30 00:10:12.118507842 +0000 UTC m=+1.990005894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.162086 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e91b62e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.119765733 +0000 UTC m=+1.991263785,LastTimestamp:2026-01-30 00:10:12.119765733 +0000 UTC m=+1.991263785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.168333 5103 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4e921b73f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.120180543 +0000 UTC m=+1.991678595,LastTimestamp:2026-01-30 00:10:12.120180543 +0000 UTC m=+1.991678595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.175438 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4e9260c2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.12046443 +0000 UTC m=+1.991962492,LastTimestamp:2026-01-30 00:10:12.12046443 +0000 UTC m=+1.991962492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.182134 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4e92d34a6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.120933542 +0000 UTC m=+1.992431604,LastTimestamp:2026-01-30 00:10:12.120933542 +0000 UTC m=+1.992431604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.189491 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4e9b6f3b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.129960886 +0000 UTC m=+2.001458938,LastTimestamp:2026-01-30 00:10:12.129960886 +0000 UTC m=+2.001458938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.195446 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fc500300 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.441981696 +0000 UTC m=+2.313479788,LastTimestamp:2026-01-30 00:10:12.441981696 +0000 UTC m=+2.313479788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.203245 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fd49a5dc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.458341852 +0000 UTC m=+2.329839944,LastTimestamp:2026-01-30 00:10:12.458341852 +0000 UTC m=+2.329839944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.208713 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fd645444 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.460090436 +0000 UTC m=+2.331588528,LastTimestamp:2026-01-30 00:10:12.460090436 +0000 UTC m=+2.331588528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.215024 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b51702e6e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.88991306 +0000 UTC m=+2.761411142,LastTimestamp:2026-01-30 00:10:12.88991306 +0000 UTC m=+2.761411142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.220524 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b517328668 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.893034088 +0000 UTC m=+2.764532140,LastTimestamp:2026-01-30 00:10:12.893034088 +0000 UTC m=+2.764532140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.222962 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b51759fd82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.895620482 +0000 UTC m=+2.767118534,LastTimestamp:2026-01-30 00:10:12.895620482 +0000 UTC m=+2.767118534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.230256 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b517821137 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.898246967 +0000 UTC m=+2.769745029,LastTimestamp:2026-01-30 00:10:12.898246967 +0000 UTC m=+2.769745029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.236286 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52de806dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.274027741 +0000 UTC m=+3.145525803,LastTimestamp:2026-01-30 00:10:13.274027741 +0000 UTC m=+3.145525803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.243387 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b52df8ab52 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.275118418 +0000 UTC m=+3.146616480,LastTimestamp:2026-01-30 00:10:13.275118418 +0000 UTC m=+3.146616480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.248451 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52df9d6be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.27519507 +0000 UTC m=+3.146693132,LastTimestamp:2026-01-30 00:10:13.27519507 +0000 UTC m=+3.146693132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.256118 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52e02f091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.275791505 +0000 UTC m=+3.147289567,LastTimestamp:2026-01-30 00:10:13.275791505 +0000 UTC m=+3.147289567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.263182 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b52e076a92 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.276084882 +0000 UTC m=+3.147582944,LastTimestamp:2026-01-30 00:10:13.276084882 +0000 UTC m=+3.147582944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.270436 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52f95b8d9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.302188249 +0000 UTC m=+3.173686311,LastTimestamp:2026-01-30 00:10:13.302188249 +0000 UTC m=+3.173686311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.277336 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52fa69c33 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.303295027 +0000 UTC m=+3.174793089,LastTimestamp:2026-01-30 00:10:13.303295027 +0000 UTC m=+3.174793089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.282139 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52fd44051 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.306286161 +0000 UTC m=+3.177784223,LastTimestamp:2026-01-30 00:10:13.306286161 +0000 UTC m=+3.177784223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.289754 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52fe11975 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307128181 +0000 UTC m=+3.178626243,LastTimestamp:2026-01-30 00:10:13.307128181 +0000 UTC m=+3.178626243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.297165 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52fe93695 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307659925 +0000 UTC m=+3.179157977,LastTimestamp:2026-01-30 00:10:13.307659925 +0000 UTC m=+3.179157977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.304032 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b52fed3d9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307923871 +0000 UTC m=+3.179421933,LastTimestamp:2026-01-30 00:10:13.307923871 +0000 UTC m=+3.179421933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.310522 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52ff86853 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.308655699 +0000 UTC m=+3.180153761,LastTimestamp:2026-01-30 00:10:13.308655699 +0000 UTC m=+3.180153761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.317806 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5315c1f35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.331967797 +0000 UTC m=+3.203465859,LastTimestamp:2026-01-30 00:10:13.331967797 +0000 UTC m=+3.203465859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.325312 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53e7847ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.551917039 +0000 UTC m=+3.423415091,LastTimestamp:2026-01-30 00:10:13.551917039 +0000 UTC m=+3.423415091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.329658 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b53e789174 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.55193586 +0000 UTC m=+3.423433912,LastTimestamp:2026-01-30 00:10:13.55193586 +0000 UTC m=+3.423433912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.333216 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b53f79d049 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.568794697 +0000 UTC m=+3.440292759,LastTimestamp:2026-01-30 00:10:13.568794697 +0000 UTC m=+3.440292759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.338007 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b53f9d3415 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.571114005 +0000 UTC m=+3.442612057,LastTimestamp:2026-01-30 00:10:13.571114005 +0000 UTC m=+3.442612057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.341156 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53fa1b992 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.571410322 +0000 UTC m=+3.442908384,LastTimestamp:2026-01-30 00:10:13.571410322 +0000 UTC m=+3.442908384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.345520 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53fb011f7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.572350455 +0000 UTC m=+3.443848507,LastTimestamp:2026-01-30 00:10:13.572350455 +0000 UTC m=+3.443848507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.351837 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54134fa35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.597837877 +0000 UTC m=+3.469335929,LastTimestamp:2026-01-30 00:10:13.597837877 +0000 UTC m=+3.469335929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.357196 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b5414848a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.599103139 +0000 UTC 
m=+3.470601191,LastTimestamp:2026-01-30 00:10:13.599103139 +0000 UTC m=+3.470601191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.363729 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54d724d25 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.803183397 +0000 UTC m=+3.674681449,LastTimestamp:2026-01-30 00:10:13.803183397 +0000 UTC m=+3.674681449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.370641 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b54d93a4d3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.805368531 +0000 UTC m=+3.676866583,LastTimestamp:2026-01-30 00:10:13.805368531 +0000 UTC m=+3.676866583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.378679 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b54e6fd1f9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.819798009 +0000 UTC m=+3.691296061,LastTimestamp:2026-01-30 00:10:13.819798009 +0000 UTC m=+3.691296061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.386421 5103 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54eaa18db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.823617243 +0000 UTC m=+3.695115285,LastTimestamp:2026-01-30 00:10:13.823617243 +0000 UTC m=+3.695115285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.393896 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54ec37a7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.825280635 +0000 UTC m=+3.696778687,LastTimestamp:2026-01-30 00:10:13.825280635 +0000 UTC m=+3.696778687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.403077 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b554529798 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.918545816 +0000 UTC m=+3.790043868,LastTimestamp:2026-01-30 00:10:13.918545816 +0000 UTC m=+3.790043868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.410582 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55c060bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.047746999 +0000 UTC m=+3.919245051,LastTimestamp:2026-01-30 00:10:14.047746999 +0000 UTC m=+3.919245051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.419842 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d4fdeb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.069362355 +0000 UTC m=+3.940860407,LastTimestamp:2026-01-30 00:10:14.069362355 +0000 UTC m=+3.940860407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.425754 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.432175 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b56219f89e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.149716126 +0000 UTC m=+4.021214178,LastTimestamp:2026-01-30 00:10:14.149716126 +0000 UTC m=+4.021214178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.433519 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5636c0001 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.171869185 +0000 UTC m=+4.043367227,LastTimestamp:2026-01-30 00:10:14.171869185 +0000 UTC m=+4.043367227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.439969 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.447381 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC 
m=+4.204624474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.455578 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b59120b03e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.938685502 +0000 UTC m=+4.810183584,LastTimestamp:2026-01-30 00:10:14.938685502 +0000 UTC m=+4.810183584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.461352 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a2b16a4a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.233382986 +0000 UTC m=+5.104881038,LastTimestamp:2026-01-30 00:10:15.233382986 +0000 UTC m=+5.104881038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.468348 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a3943013 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.248244755 +0000 UTC m=+5.119742797,LastTimestamp:2026-01-30 00:10:15.248244755 +0000 UTC m=+5.119742797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.474752 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a3a789e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.249512936 +0000 UTC m=+5.121010988,LastTimestamp:2026-01-30 00:10:15.249512936 +0000 UTC m=+5.121010988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.481909 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b40834d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.524283606 +0000 UTC m=+5.395781698,LastTimestamp:2026-01-30 00:10:15.524283606 +0000 UTC m=+5.395781698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.489295 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b50512e6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.540855526 +0000 UTC m=+5.412353618,LastTimestamp:2026-01-30 00:10:15.540855526 +0000 UTC m=+5.412353618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.495353 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b51af0da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.542288602 +0000 UTC m=+5.413786694,LastTimestamp:2026-01-30 00:10:15.542288602 +0000 UTC 
m=+5.413786694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.504873 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c4c5c5eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.805142507 +0000 UTC m=+5.676640609,LastTimestamp:2026-01-30 00:10:15.805142507 +0000 UTC m=+5.676640609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.512286 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c5bd4e67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.821364839 +0000 UTC m=+5.692862941,LastTimestamp:2026-01-30 00:10:15.821364839 +0000 UTC m=+5.692862941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.519509 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c5d59099 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.822954649 +0000 UTC m=+5.694452741,LastTimestamp:2026-01-30 00:10:15.822954649 +0000 UTC m=+5.694452741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.527295 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d5f95ff6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.09373695 +0000 UTC m=+5.965235022,LastTimestamp:2026-01-30 00:10:16.09373695 +0000 UTC m=+5.965235022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.538329 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d6d2d14a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.107987274 +0000 UTC m=+5.979485336,LastTimestamp:2026-01-30 00:10:16.107987274 +0000 UTC m=+5.979485336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.545341 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d6ee5e6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.109792878 +0000 UTC m=+5.981290940,LastTimestamp:2026-01-30 00:10:16.109792878 +0000 UTC m=+5.981290940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.552793 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5e4b4ce79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.340901497 +0000 UTC m=+6.212399549,LastTimestamp:2026-01-30 00:10:16.340901497 +0000 UTC m=+6.212399549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.560014 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5e5c29605 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.358581765 +0000 UTC m=+6.230079807,LastTimestamp:2026-01-30 00:10:16.358581765 +0000 UTC m=+6.230079807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.570339 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59b7c89b1d7b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.459390331 +0000 UTC m=+14.330888423,LastTimestamp:2026-01-30 00:10:24.459390331 +0000 UTC m=+14.330888423,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.578029 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7c89d04d6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.459515094 +0000 UTC m=+14.331013176,LastTimestamp:2026-01-30 00:10:24.459515094 +0000 UTC m=+14.331013176,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.585447 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b828a5f55a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:36 crc kubenswrapper[5103]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,LastTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.592745 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b828a70725 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,LastTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.599981 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b828a5f55a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b828a5f55a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:36 crc kubenswrapper[5103]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get 
path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,LastTimestamp:2026-01-30 00:10:26.081045266 +0000 UTC m=+15.952543358,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.608208 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b828a70725\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b828a70725 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,LastTimestamp:2026-01-30 00:10:26.081164359 +0000 UTC m=+15.952662451,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.617842 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b84999afeb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Jan 30 00:10:36 crc kubenswrapper[5103]: body: [+]ping ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]log ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:10:36 crc kubenswrapper[5103]: 
[+]poststarthook/start-apiextensions-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/crd-informer-synced ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 00:10:36 crc kubenswrapper[5103]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/bootstrap-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-registration-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]autoregister-completion ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: livez check failed Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.623557611 +0000 UTC m=+16.495055713,LastTimestamp:2026-01-30 00:10:26.623557611 +0000 UTC m=+16.495055713,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.626480 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8499d0e67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.623778407 +0000 UTC m=+16.495276499,LastTimestamp:2026-01-30 00:10:26.623778407 +0000 UTC m=+16.495276499,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.634756 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b957844c12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152004114 +0000 UTC m=+21.023502166,LastTimestamp:2026-01-30 00:10:31.152004114 +0000 UTC m=+21.023502166,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.645321 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b957856fd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152078806 +0000 UTC m=+21.023576858,LastTimestamp:2026-01-30 00:10:31.152078806 +0000 UTC m=+21.023576858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.653531 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b95786e976 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152175478 +0000 UTC m=+21.023673540,LastTimestamp:2026-01-30 00:10:31.152175478 +0000 UTC m=+21.023673540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.661409 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b95788079a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.15224873 +0000 UTC m=+21.023746792,LastTimestamp:2026-01-30 00:10:31.15224873 +0000 UTC m=+21.023746792,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.668666 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b97314abd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.614450646 +0000 UTC m=+21.485948708,LastTimestamp:2026-01-30 00:10:31.614450646 +0000 UTC m=+21.485948708,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.675954 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b973154c31 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.614491697 +0000 UTC m=+21.485989749,LastTimestamp:2026-01-30 00:10:31.614491697 +0000 UTC m=+21.485989749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.684470 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b55d68c6d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:32.021964627 +0000 UTC m=+21.893462679,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.692450 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56c34dc06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:32.244499073 +0000 UTC m=+22.115997145,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.698713 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56d089716\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:32.256814808 +0000 UTC m=+22.128312850,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.706522 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.714255 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:35.04356059 +0000 UTC m=+24.915058652,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: I0130 00:10:36.747874 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.494941 5103 kubelet_node_status.go:413] "Setting node annotation to enable 
volume controller attach/detach" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496562 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496628 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.507816 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.749086 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.836436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.836776 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838178 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838276 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838290 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.838857 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.839271 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.839512 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.847767 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:37.839474018 +0000 UTC m=+27.710972080,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:38 crc kubenswrapper[5103]: E0130 00:10:38.016710 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:38 crc kubenswrapper[5103]: I0130 00:10:38.748509 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:39 crc kubenswrapper[5103]: I0130 00:10:39.747596 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.068874 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.329643 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.393632 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:40 crc kubenswrapper[5103]: I0130 00:10:40.748573 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.940316 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:41 crc kubenswrapper[5103]: I0130 00:10:41.749322 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:42 crc kubenswrapper[5103]: E0130 00:10:42.275866 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:42 crc kubenswrapper[5103]: I0130 00:10:42.748404 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.023988 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.024669 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.026242 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.026338 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.026358 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.027117 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.027640 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.028041 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.036465 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:43.027976599 +0000 UTC m=+32.899474691,Count:4,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.748581 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.508616 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510321 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510443 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:44 crc kubenswrapper[5103]: E0130 00:10:44.528772 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.748395 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:45 crc kubenswrapper[5103]: I0130 00:10:45.748919 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:46 crc kubenswrapper[5103]: I0130 00:10:46.748545 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:47 crc kubenswrapper[5103]: E0130 00:10:47.401999 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:47 crc kubenswrapper[5103]: I0130 00:10:47.748898 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:48 crc kubenswrapper[5103]: I0130 00:10:48.748298 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:49 crc kubenswrapper[5103]: I0130 00:10:49.748885 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:50 crc kubenswrapper[5103]: I0130 00:10:50.749529 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:50 crc kubenswrapper[5103]: E0130 00:10:50.941362 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.529102 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530416 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530464 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530483 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530521 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:51 crc kubenswrapper[5103]: E0130 00:10:51.544371 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.747764 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:52 crc kubenswrapper[5103]: I0130 00:10:52.749394 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:53 crc kubenswrapper[5103]: I0130 00:10:53.748808 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:54 crc kubenswrapper[5103]: E0130 00:10:54.410912 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:54 crc kubenswrapper[5103]: I0130 00:10:54.747882 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.747619 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.867758 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869284 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:55 crc kubenswrapper[5103]: E0130 00:10:55.869895 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.870438 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:55 crc kubenswrapper[5103]: E0130 00:10:55.880399 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b55d68c6d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:55.872248784 +0000 UTC m=+45.743746876,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.097144 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56c34dc06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:56.091537438 +0000 UTC m=+45.963035520,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.105615 5103 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.108511 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"} Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.109313 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56d089716\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:56.104385141 +0000 UTC m=+45.975883193,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.748821 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.759995 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:57 crc kubenswrapper[5103]: E0130 00:10:57.083970 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.110339 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.110999 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.111076 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.111090 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:57 crc kubenswrapper[5103]: E0130 00:10:57.111518 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:57 crc 
kubenswrapper[5103]: I0130 00:10:57.748650 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.115480 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.116746 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119587 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" exitCode=255 Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119659 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"} Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119720 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120010 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120916 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120941 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.121651 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.122514 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.122935 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.131467 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:58.12286286 +0000 UTC m=+47.994360952,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.544703 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546014 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546453 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546602 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.561924 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.749181 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.884129 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:59 crc kubenswrapper[5103]: I0130 00:10:59.126243 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:10:59 crc kubenswrapper[5103]: I0130 00:10:59.747723 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:00 crc kubenswrapper[5103]: I0130 00:11:00.748201 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:00 crc kubenswrapper[5103]: E0130 00:11:00.942188 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" 
not found" Jan 30 00:11:01 crc kubenswrapper[5103]: E0130 00:11:01.419500 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:01 crc kubenswrapper[5103]: I0130 00:11:01.748279 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:02 crc kubenswrapper[5103]: I0130 00:11:02.750880 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:03 crc kubenswrapper[5103]: E0130 00:11:03.533442 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:11:03 crc kubenswrapper[5103]: I0130 00:11:03.747883 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:04 crc kubenswrapper[5103]: I0130 00:11:04.747016 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.562358 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565004 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565081 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565097 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565134 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:05 crc kubenswrapper[5103]: E0130 00:11:05.579771 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.747130 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.951210 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:11:05 crc 
kubenswrapper[5103]: I0130 00:11:05.951536 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952504 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952515 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:05 crc kubenswrapper[5103]: E0130 00:11:05.952893 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:06 crc kubenswrapper[5103]: I0130 00:11:06.744810 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.110769 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.111679 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112871 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112966 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.113457 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.113754 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.114040 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.121978 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:11:07.114005336 +0000 UTC m=+56.985503388,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.747035 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.836030 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.836463 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.838657 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.839207 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.839568 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.846706 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:11:07.839508578 +0000 UTC m=+57.711006660,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:08 crc kubenswrapper[5103]: E0130 00:11:08.428578 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:08 crc kubenswrapper[5103]: I0130 00:11:08.746340 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:09 crc kubenswrapper[5103]: I0130 00:11:09.749271 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:10 crc kubenswrapper[5103]: I0130 00:11:10.748456 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:10 crc kubenswrapper[5103]: E0130 00:11:10.943416 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:11 crc kubenswrapper[5103]: I0130 00:11:11.746982 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.580417 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.582770 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.582985 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.583209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.583413 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:12 crc kubenswrapper[5103]: E0130 00:11:12.600373 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.747140 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:13 crc kubenswrapper[5103]: I0130 00:11:13.745999 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:14 crc kubenswrapper[5103]: I0130 00:11:14.742760 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:15 crc kubenswrapper[5103]: E0130 00:11:15.433727 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:15 crc kubenswrapper[5103]: I0130 00:11:15.746441 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:16 crc kubenswrapper[5103]: I0130 00:11:16.748319 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:17 crc kubenswrapper[5103]: I0130 00:11:17.745714 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.748720 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.868229 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870248 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:18 crc kubenswrapper[5103]: E0130 00:11:18.870540 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.049715 5103 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dpv7j"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.057021 5103 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dpv7j"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.142865 5103 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.570140 5103 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.600876 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602266 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602399 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602487 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602701 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.614697 5103 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.615018 5103 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.615036 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.618888 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619009 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619139 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619273 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619393 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.635229 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643148 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643198 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.643208 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643238 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.653549 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.661983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662032 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.662064 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662086 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662099 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.672702 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.679733 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679753 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679767 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691327 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691519 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691545 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 
00:11:19.791962 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.892279 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.993427 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: I0130 00:11:20.058951 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-01 00:06:19 +0000 UTC" deadline="2026-02-25 23:44:29.01549579 +0000 UTC" Jan 30 00:11:20 crc kubenswrapper[5103]: I0130 00:11:20.058999 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="647h33m8.956500012s" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.093930 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.194214 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.295169 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.396269 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.497336 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.597722 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.698034 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.798589 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.899578 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.944452 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.000284 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.100352 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.201081 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.301518 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.401661 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 
00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.502470 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.603481 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.704129 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.804839 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.905764 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.006280 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.107247 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.207502 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.307973 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.408378 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.508488 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.609594 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.710659 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.811383 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.868259 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869444 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869517 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869540 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.870201 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.870703 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.917019 5103 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.017981 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.118965 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.219223 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.320249 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.421449 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.522401 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.622910 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.723119 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.824258 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.924717 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.024880 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.125262 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.216354 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.217941 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b"} Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218157 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218755 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.219144 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 
00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.225976 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.326509 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.426763 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.527941 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.628397 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.728904 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.829394 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.930136 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.030723 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.131759 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.222652 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.223258 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225495 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" exitCode=255 Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b"} Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225598 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225787 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226504 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226537 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226550 5103 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.227030 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.227293 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.227571 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.232209 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.332525 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.432887 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.533338 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.633477 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.734586 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.835252 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.936083 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.036829 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.137293 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.180100 5103 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.229728 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239385 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc 
kubenswrapper[5103]: I0130 00:11:26.239419 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239432 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.273709 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.290910 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341667 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341731 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341753 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341766 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.389710 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444477 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444559 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444579 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444628 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.490248 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546992 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.590332 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656851 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656879 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656890 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759296 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759349 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759463 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.765934 5103 apiserver.go:52] "Watching apiserver" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.773789 5103 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.774441 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-bs8rz","openshift-multus/multus-additional-cni-plugins-6tmbq","openshift-multus/multus-swfns","openshift-multus/network-metrics-daemon-vsrcq","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6","openshift-image-registry/node-ca-226mj","openshift-machine-config-operator/machine-config-daemon-6g6hp","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-8lwjf","openshift-etcd/etcd-crc"] Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.775677 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.776300 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.776442 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777770 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777848 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777982 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.779714 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.779810 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.780812 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784100 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784293 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784758 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785152 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785357 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785546 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.786907 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.795329 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.806135 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.826568 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.837038 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.845880 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.856903 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862796 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862841 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.867331 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.898914 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899018 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.899173 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.899280 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.399259691 +0000 UTC m=+77.270757733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899312 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899347 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899382 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899434 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899495 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899522 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899551 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899579 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899604 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899652 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899703 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899741 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899781 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.900327 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.900403 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.400388238 +0000 UTC m=+77.271886290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915020 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915089 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915105 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915246 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.415222128 +0000 UTC m=+77.286720180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.934671 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935001 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935123 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935303 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:27.435278365 +0000 UTC m=+77.306776427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.965908 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966323 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966446 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966546 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966644 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.971981 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.972798 5103 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.974231 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.977366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980007 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980117 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980342 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.981242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.981680 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000268 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000587 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89lmd\" (UniqueName: 
\"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000706 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000802 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000903 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001198 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001222 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001232 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001721 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019001 5103 projected.go:289] Couldn't get configMap openshift-dns/kube-root-ca.crt: object "openshift-dns"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019033 5103 projected.go:289] Couldn't get configMap openshift-dns/openshift-service-ca.crt: object "openshift-dns"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019067 5103 projected.go:194] Error preparing data for projected volume kube-api-access-89lmd for pod openshift-dns/node-resolver-bs8rz: [object "openshift-dns"/"kube-root-ca.crt" not registered, object 
"openshift-dns"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd podName:ef3f9074-af3f-43f4-ad74-efe1ba4abc8e nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.519125711 +0000 UTC m=+77.390623763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-89lmd" (UniqueName: "kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd") pod "node-resolver-bs8rz" (UID: "ef3f9074-af3f-43f4-ad74-efe1ba4abc8e") : [object "openshift-dns"/"kube-root-ca.crt" not registered, object "openshift-dns"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069443 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069489 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069500 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069515 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069525 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.094682 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.104709 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:27 crc kubenswrapper[5103]: W0130 00:11:27.115543 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669 WatchSource:0}: Error finding container 66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669: Status 404 returned error can't find the container with id 66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669 Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.119533 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:27 crc kubenswrapper[5103]: else Jan 30 00:11:27 crc kubenswrapper[5103]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:27 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVa
r{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.120703 5103 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172095 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172184 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172213 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.208145 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:27 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:27 crc kubenswrapper[5103]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:27 crc kubenswrapper[5103]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:27 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:27 crc kubenswrapper[5103]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:27 crc kubenswrapper[5103]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-host=127.0.0.1 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-port=9743 \ Jan 30 00:11:27 crc kubenswrapper[5103]: ${ho_enable} \ Jan 30 00:11:27 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:27 crc kubenswrapper[5103]: --disable-approver \ Jan 30 00:11:27 crc kubenswrapper[5103]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:27 crc kubenswrapper[5103]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.210713 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:27 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --disable-webhook \ Jan 30 00:11:27 crc kubenswrapper[5103]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.211915 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.273139 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274420 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274592 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274618 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274637 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: W0130 00:11:27.286039 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115 WatchSource:0}: Error finding container facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115: Status 404 returned error can't find the container with id facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115 Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.289419 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.290650 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.302702 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.302858 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.302945 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307646 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307809 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307858 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.319105 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.329663 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.341555 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.352936 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.366998 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377015 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377174 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377866 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.391591 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.402172 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.405929 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.405993 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406015 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406067 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406088 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406109 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406133 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: 
\"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406152 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406172 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406195 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406231 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406263 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406327 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406351 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406371 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: 
\"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406393 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406412 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406430 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406547 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406598 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.406581336 +0000 UTC m=+78.278079398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406984 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.407027 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.407017667 +0000 UTC m=+78.278515729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.409957 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.416760 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.424460 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.431983 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.441042 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479281 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479343 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479363 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479375 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511667 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511778 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511841 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511784 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511880 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511960 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511971 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511994 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512119 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512194 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512732 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512895 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512902 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.512927 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.512964 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513003 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513117 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513158 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513178 5103 projected.go:194] Error 
preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513185 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513231 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513267 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.513242946 +0000 UTC m=+78.384741028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513304 5103 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: object "openshift-multus"/"multus-daemon-config" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513323 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513372 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.513349958 +0000 UTC m=+78.384848030 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513416 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513593 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513599 5103 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: object "openshift-multus"/"cni-copy-resources" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513624 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513681 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.013655906 +0000 UTC m=+77.885153998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : object "openshift-multus"/"cni-copy-resources" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513700 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513761 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513856 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513859 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513880 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513924 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513926 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513958 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:28.013935672 +0000 UTC m=+77.885433764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : object "openshift-multus"/"multus-daemon-config" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513998 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.514084 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530080 5103 projected.go:289] Couldn't get configMap openshift-multus/kube-root-ca.crt: object "openshift-multus"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530130 5103 projected.go:289] Couldn't get configMap openshift-multus/openshift-service-ca.crt: object "openshift-multus"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530147 5103 projected.go:194] Error preparing data for projected volume kube-api-access-4t7t4 for pod openshift-multus/multus-swfns: [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530231 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4 podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.030208097 +0000 UTC m=+77.901706209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4t7t4" (UniqueName: "kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581627 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581759 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581863 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.615225 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.620858 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.621038 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.621720 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.625695 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626087 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626217 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626302 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.641564 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.655112 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:27 crc kubenswrapper[5103]: set -uo pipefail Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:27 crc kubenswrapper[5103]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:27 crc kubenswrapper[5103]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:27 crc kubenswrapper[5103]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:27 crc kubenswrapper[5103]: echo "Failed to preserve hosts file. Exiting." Jan 30 00:11:27 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: while true; do Jan 30 00:11:27 crc kubenswrapper[5103]: declare -A svc_ips Jan 30 00:11:27 crc kubenswrapper[5103]: for svc in "${services[@]}"; do Jan 30 00:11:27 crc kubenswrapper[5103]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:27 crc kubenswrapper[5103]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:27 crc kubenswrapper[5103]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:27 crc kubenswrapper[5103]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 30 00:11:27 crc kubenswrapper[5103]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:27 crc kubenswrapper[5103]: for i in ${!cmds[*]} Jan 30 00:11:27 crc kubenswrapper[5103]: do Jan 30 00:11:27 crc kubenswrapper[5103]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:27 crc kubenswrapper[5103]: break Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:27 crc kubenswrapper[5103]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:27 crc kubenswrapper[5103]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:27 crc kubenswrapper[5103]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:27 crc kubenswrapper[5103]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: continue Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Append resolver entries for services Jan 30 00:11:27 crc kubenswrapper[5103]: rc=0 Jan 30 00:11:27 crc kubenswrapper[5103]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:27 crc kubenswrapper[5103]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:27 crc kubenswrapper[5103]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: continue Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:27 crc kubenswrapper[5103]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:27 crc kubenswrapper[5103]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:27 crc kubenswrapper[5103]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: unset svc_ips Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89lmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bs8rz_openshift-dns(ef3f9074-af3f-43f4-ad74-efe1ba4abc8e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.656339 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bs8rz" podUID="ef3f9074-af3f-43f4-ad74-efe1ba4abc8e" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.656337 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.670160 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685249 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685347 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685380 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685405 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.687597 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.702682 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716329 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716583 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716729 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716756 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.719026 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.737734 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.753165 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788443 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788501 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788514 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788555 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.817893 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818007 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818099 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818236 5103 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.318289821 +0000 UTC m=+78.189787873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818314 5103 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818447 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:28.318422655 +0000 UTC m=+78.189920727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834435 5103 projected.go:289] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834487 5103 projected.go:289] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834502 5103 projected.go:194] Error preparing data for projected volume kube-api-access-jtw8v for pod openshift-machine-config-operator/machine-config-daemon-6g6hp: [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834582 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.334559306 +0000 UTC m=+78.206057358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jtw8v" (UniqueName: "kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.867854 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872098 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872807 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872858 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872874 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.874925 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.883452 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890172 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890204 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.893939 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.894386 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896305 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896692 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896984 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897289 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897588 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897771 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897991 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898225 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898313 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898479 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898833 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919004 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919071 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919099 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919124 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919355 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919754 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919899 5103 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.921112 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.929900 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.940631 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.950499 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.960154 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.970619 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.978836 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.989889 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992014 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992057 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 
00:11:27.992066 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992093 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.002579 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.012763 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.020763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.020973 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021087 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021227 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021853 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021987 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022118 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021886 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021808 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022436 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022450 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022550 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022621 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022672 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022709 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.023095 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.023364 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.024455 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.033502 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.035533 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.036843 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.036974 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.039779 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.042512 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.045396 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.051231 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.060467 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.068320 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.077068 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.087630 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095083 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095137 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095153 5103 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095168 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.098315 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.105599 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.113923 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.122101 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123220 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123288 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123351 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123439 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.123491 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123503 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123724 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.123787 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.623763047 +0000 UTC m=+78.495261109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123960 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124094 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124209 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124344 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124420 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.131271 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.133078 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.141918 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " 
pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.163025 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.172253 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.180747 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.187598 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.194716 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197512 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197557 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197572 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197592 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197604 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.208019 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.210956 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.217440 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.220550 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225815 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225871 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225901 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225942 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225973 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225977 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225999 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226124 5103 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226224 5103 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-flatfile-config: object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226245 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist podName:2ed60012-d4e8-45fd-b124-fe7d6ca49ca0 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.726219625 +0000 UTC m=+78.597717677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-6tmbq" (UID: "2ed60012-d4e8-45fd-b124-fe7d6ca49ca0") : object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226264 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226326 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap podName:2ed60012-d4e8-45fd-b124-fe7d6ca49ca0 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.726298327 +0000 UTC m=+78.597796379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whereabouts-flatfile-configmap" (UniqueName: "kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap") pod "multus-additional-cni-plugins-6tmbq" (UID: "2ed60012-d4e8-45fd-b124-fe7d6ca49ca0") : object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226186 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226384 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226929 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.241283 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b75391_d8bb_4610_a69e_1f5c3a4e4eef.slice/crio-48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530 WatchSource:0}: Error finding container 48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530: Status 404 returned error can't find the container with id 48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.244842 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | 
xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:28 crc kubenswrapper[5103]: while [ true ]; Jan 30 00:11:28 crc kubenswrapper[5103]: do Jan 30 00:11:28 crc kubenswrapper[5103]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:28 crc kubenswrapper[5103]: echo $f Jan 30 00:11:28 crc kubenswrapper[5103]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:28 crc kubenswrapper[5103]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:28 crc kubenswrapper[5103]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:28 crc kubenswrapper[5103]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: mkdir $reg_dir_path Jan 30 00:11:28 crc kubenswrapper[5103]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:11:28 crc kubenswrapper[5103]: echo $d Jan 30 00:11:28 crc kubenswrapper[5103]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:28 crc kubenswrapper[5103]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:28 crc kubenswrapper[5103]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait ${!} Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-226mj_openshift-image-registry(a0b75391-d8bb-4610-a69e-1f5c3a4e4eef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.245117 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -euo pipefail Jan 30 00:11:28 crc kubenswrapper[5103]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:28 crc kubenswrapper[5103]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:28 crc kubenswrapper[5103]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:28 crc kubenswrapper[5103]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:11:28 crc kubenswrapper[5103]: TS=$(date +%s) Jan 30 00:11:28 crc kubenswrapper[5103]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:28 crc kubenswrapper[5103]: HAS_LOGGED_INFO=0 Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: log_missing_certs(){ Jan 30 00:11:28 crc kubenswrapper[5103]: CUR_TS=$(date +%s) Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:11:28 crc kubenswrapper[5103]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:28 crc kubenswrapper[5103]: HAS_LOGGED_INFO=1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: } Jan 30 00:11:28 crc kubenswrapper[5103]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 30 00:11:28 crc kubenswrapper[5103]: log_missing_certs Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 5 Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:28 crc kubenswrapper[5103]: --logtostderr \ Jan 30 00:11:28 crc kubenswrapper[5103]: --secure-listen-address=:9108 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.246283 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-226mj" podUID="a0b75391-d8bb-4610-a69e-1f5c3a4e4eef" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.249150 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.251285 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.254955 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:28 crc kubenswrapper[5103]: # will rollout control plane pods as well Jan 30 00:11:28 crc kubenswrapper[5103]: network_segmentation_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" != "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: route_advertisements_enable_flag= Jan 30 
00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_policy_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:28 crc kubenswrapper[5103]: admin_network_policy_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: if [ "shared" == "shared" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:28 crc kubenswrapper[5103]: elif [ "shared" == "local" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:28 crc kubenswrapper[5103]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-enable-pprof \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-enable-config-duration \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${multi_network_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${gateway_mode_flags} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${route_advertisements_enable_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-ip=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-firewall=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-qos=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-service=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-multicast \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-multi-external-gateway=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${admin_network_policy_enabled_flag} Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.256145 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.262355 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7dd7e02_4357_4643_8c23_2fb57ba70405.slice/crio-51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1 WatchSource:0}: Error finding container 51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1: Status 404 returned error can't find the container with id 51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.265274 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:28 crc kubenswrapper[5103]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:28 crc kubenswrapper[5103]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-swfns_openshift-multus(a7dd7e02-4357-4643-8c23-2fb57ba70405): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.268430 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-swfns" podUID="a7dd7e02-4357-4643-8c23-2fb57ba70405" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299660 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299719 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299788 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.300201 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.324726 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327361 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327390 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327449 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327499 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327649 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.329170 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.333778 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.336496 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.346623 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.354097 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.363336 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.372224 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.379980 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401751 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401808 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401818 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401835 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401847 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.411575 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428400 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428617 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428671 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428707 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428747 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.428758 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428799 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.428862 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.428839954 +0000 UTC m=+80.300338006 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428950 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429008 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429032 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429168 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429212 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429266 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429338 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429364 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.429369 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429423 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.429447 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.429427148 +0000 UTC m=+80.300925210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429491 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429566 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429621 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429665 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 
00:11:28.429694 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429713 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.434347 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.452708 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.470897 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.472943 5103 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.472962 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.473192 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486394 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486446 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d61d6c3995a503b66feabc08d51a197a4ec103a2e9d6df32ab81ca26927ce79c"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486467 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486479 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"578d2296c0b9b147f002bab00ce887ae174a1dfc57c08f5d70b218ff4df99c74"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486490 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-226mj" event={"ID":"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef","Type":"ContainerStarted","Data":"48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486500 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bs8rz" event={"ID":"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e","Type":"ContainerStarted","Data":"1d888c4fadd263fbfa5894c72b0570a279483acc20df2628e05b5ba47677c065"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486511 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486846 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.487004 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.487083 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.487150 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.488057 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.488358 5103 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -uo pipefail Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:28 crc kubenswrapper[5103]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:28 crc kubenswrapper[5103]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:28 crc kubenswrapper[5103]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Failed to preserve hosts file. Exiting." Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: while true; do Jan 30 00:11:28 crc kubenswrapper[5103]: declare -A svc_ips Jan 30 00:11:28 crc kubenswrapper[5103]: for svc in "${services[@]}"; do Jan 30 00:11:28 crc kubenswrapper[5103]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:28 crc kubenswrapper[5103]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:28 crc kubenswrapper[5103]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:28 crc kubenswrapper[5103]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:11:28 crc kubenswrapper[5103]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:28 crc kubenswrapper[5103]: for i in ${!cmds[*]} Jan 30 00:11:28 crc kubenswrapper[5103]: do Jan 30 00:11:28 crc kubenswrapper[5103]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:28 crc kubenswrapper[5103]: break Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:28 crc kubenswrapper[5103]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:28 crc kubenswrapper[5103]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:28 crc kubenswrapper[5103]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:28 crc kubenswrapper[5103]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: continue Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Append resolver entries for services Jan 30 00:11:28 crc kubenswrapper[5103]: rc=0 Jan 30 00:11:28 crc kubenswrapper[5103]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:28 crc kubenswrapper[5103]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:28 crc kubenswrapper[5103]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: continue Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:28 crc kubenswrapper[5103]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:28 crc kubenswrapper[5103]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:28 crc kubenswrapper[5103]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: unset svc_ips Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89lmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bs8rz_openshift-dns(ef3f9074-af3f-43f4-ad74-efe1ba4abc8e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.489319 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.489420 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bs8rz" podUID="ef3f9074-af3f-43f4-ad74-efe1ba4abc8e" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.490119 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:28 crc kubenswrapper[5103]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:28 crc kubenswrapper[5103]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:28 crc kubenswrapper[5103]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:28 crc kubenswrapper[5103]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-host=127.0.0.1 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-port=9743 \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ho_enable} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:28 crc kubenswrapper[5103]: --disable-approver \ Jan 30 00:11:28 crc kubenswrapper[5103]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:28 crc kubenswrapper[5103]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.491216 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.495128 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --disable-webhook \ Jan 30 00:11:28 crc kubenswrapper[5103]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.497805 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for 
\"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.499222 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.499303 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:28 crc kubenswrapper[5103]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.500460 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504464 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504505 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504523 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504571 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504837 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.504865 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f6985e_a0c9_43c8_a1bc_00f85204425f.slice/crio-3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3 WatchSource:0}: Error finding container 3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3: Status 404 returned error can't find the container with id 3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.507129 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.508372 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.509593 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.510916 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.529103 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530720 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530781 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530824 5103 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530868 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530902 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530969 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531004 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531039 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531135 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531170 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 
00:11:28.531203 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531236 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531271 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531397 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531432 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531466 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531500 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531538 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531564 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531570 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531616 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531635 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531653 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531675 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531693 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531710 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531725 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531745 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531762 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 
00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531779 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531798 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531792 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531815 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531832 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531849 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531867 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531885 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531902 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531934 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531954 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531970 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531987 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532031 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532074 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532093 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532109 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532128 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532194 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532211 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532226 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532247 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532272 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532294 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532313 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532406 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532454 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533014 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533161 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.533212 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:29.033184397 +0000 UTC m=+78.904682479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533261 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533303 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533313 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533433 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533465 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533554 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533613 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533641 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533695 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533723 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533840 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: 
\"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533867 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533926 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533992 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534020 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534011 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534083 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534171 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534198 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534254 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534282 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534334 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534364 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534416 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534444 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534496 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534523 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534604 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.535153 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536277 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536345 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536915 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536929 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536089 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.535565 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537588 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537688 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537733 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537776 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537813 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537848 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537895 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537932 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537968 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538015 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538088 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: 
\"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538132 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538165 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538202 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538244 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538280 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538317 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538361 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538399 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538859 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539734 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540506 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537687 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540721 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540590 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537886 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540825 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537985 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540887 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540914 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540940 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540969 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540995 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541023 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541067 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541092 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541122 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541149 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541173 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541199 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541225 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541249 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541281 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541309 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541343 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541370 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541395 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 
00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541424 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541448 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541477 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541504 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541529 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541556 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541584 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541613 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541640 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541664 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541689 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541713 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541743 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541773 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541806 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541833 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541865 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541892 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541917 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541944 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541972 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542562 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542636 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542660 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542683 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542706 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542725 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542746 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542766 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542787 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542817 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542847 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542869 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542891 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542916 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542955 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542980 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543001 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543036 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod 
\"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543090 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543121 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543147 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544715 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544953 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545188 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545244 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545371 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545427 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545477 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545527 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545615 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545669 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545712 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545766 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545807 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545899 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545947 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545988 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546026 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546093 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546134 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546172 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546215 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546259 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547347 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.547410 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547608 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547652 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547694 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547795 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547838 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547884 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547930 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: 
\"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548467 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548527 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548581 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548644 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548705 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548750 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548811 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548857 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548903 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548972 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549024 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549092 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549137 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549192 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549248 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549285 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538265 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538577 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549473 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549521 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549587 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549855 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550028 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550130 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550173 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550215 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550322 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550361 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550434 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550473 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550531 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550579 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550621 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550737 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" 
Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550883 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551109 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551205 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551347 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551372 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551402 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551426 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551448 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551470 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551495 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551516 5103 reconciler_common.go:299] "Volume detached for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551539 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551560 5103 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551580 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551603 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551623 5103 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551642 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551748 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551773 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551794 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551820 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551841 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551864 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551887 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551909 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551930 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551951 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556503 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556587 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556625 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556662 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538732 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538916 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539066 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539527 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539653 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556979 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557126 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557687 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557786 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557832 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558001 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558045 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558159 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558162 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558158 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539895 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539934 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540281 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540649 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558297 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558320 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541142 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541499 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541618 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541719 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541778 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542212 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542024 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558386 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542652 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542782 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542810 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543752 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543894 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544347 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540713 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544646 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544892 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545169 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545302 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545384 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545520 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545496 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545658 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545962 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546162 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546415 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546581 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546862 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546878 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546925 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546933 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547104 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547237 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547426 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547658 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547750 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547839 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548037 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548157 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548172 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548175 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548309 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547412 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548370 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548829 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548880 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548908 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550486 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550855 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550951 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551315 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551645 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551861 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551948 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551936 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552571 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552993 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553070 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553284 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553306 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553546 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553608 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553707 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553715 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553656 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553803 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559451 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559607 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553821 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554240 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554360 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559686 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559656 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559576 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559729 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559747 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559754 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554465 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554493 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554599 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554800 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554893 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554932 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555172 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559721 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555341 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555539 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555647 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555749 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555875 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555973 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556211 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556276 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555607 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556283 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556306 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556342 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.560012 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.556493 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560162 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560181 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.556602 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560211 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560225 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560267 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.560248234 +0000 UTC m=+80.431746286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556793 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559304 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559378 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560378 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.560359607 +0000 UTC m=+80.431857659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561132 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561090 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561702 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561929 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.562255 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.562939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563085 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563171 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563707 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563814 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563843 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.564126 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565505 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565537 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565643 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565799 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565838 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566131 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566186 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566239 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566703 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566847 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.567688 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.568471 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.568489 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569474 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569490 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569602 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569622 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569724 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570299 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570450 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570879 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571340 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571584 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571760 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571788 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571846 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571896 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571919 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571957 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572606 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572731 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572964 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573202 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573263 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573333 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573363 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573850 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.574284 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.574888 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575029 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575345 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575417 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580343 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580408 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580530 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580671 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.581730 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.585856 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.586295 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.588734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.589712 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.590033 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591342 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591463 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591580 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591766 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591913 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592169 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592281 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592698 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592676 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592999 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.593174 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.596208 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.607818 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612602 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612704 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612820 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612880 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.614669 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.615023 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.622467 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.653396 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.653626 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.653700 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:29.653686012 +0000 UTC m=+79.525184064 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.657349 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658198 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658397 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658462 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658516 5103 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658575 5103 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658634 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658689 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658752 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658811 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658977 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659036 5103 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659107 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659176 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659231 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659290 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659342 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659400 5103 reconciler_common.go:299] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659452 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659508 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659563 5103 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659621 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659673 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659728 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659784 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659847 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659907 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659968 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660027 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660111 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660179 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660237 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660296 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660348 5103 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660404 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660464 5103 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660521 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660577 5103 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660647 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660712 5103 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660771 5103 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660830 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660911 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660984 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661063 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661131 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661206 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661275 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661342 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661410 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661487 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661552 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661610 5103 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661677 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661749 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661811 5103 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661876 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on 
node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661935 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662007 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662111 5103 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662184 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662256 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662505 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662565 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662624 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662689 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662741 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662791 5103 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662843 5103 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662897 5103 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662953 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663009 5103 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663079 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663138 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663200 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663252 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663314 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663365 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663415 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663465 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663522 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663583 5103 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663637 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663688 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663744 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663806 5103 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663860 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663921 5103 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663977 5103 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664031 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664139 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664213 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664267 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664404 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664544 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664563 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath 
\"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664579 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659127 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664614 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664794 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664914 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665728 5103 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665751 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665766 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665782 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665797 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665811 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665827 5103 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665841 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665854 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665870 5103 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665886 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665903 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665920 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665934 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665947 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665960 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665972 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665987 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666000 5103 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666012 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666024 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666037 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666067 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666080 5103 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666094 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666107 5103 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666120 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666133 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666146 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666159 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666171 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666185 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666200 5103 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666213 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: 
\"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666227 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666240 5103 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666252 5103 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666264 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666277 5103 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666291 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666303 5103 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666315 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666328 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666341 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666354 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666367 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666380 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666393 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666405 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666418 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666430 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666443 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666455 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666468 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666499 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666514 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666528 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666542 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666555 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666569 5103 
reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666582 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666594 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666607 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666621 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666633 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666646 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666659 5103 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666671 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666684 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666697 5103 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666712 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666725 5103 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666739 5103 
reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666751 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666765 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666778 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666790 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666803 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666816 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666829 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666842 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666854 5103 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666870 5103 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666884 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666899 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666912 5103 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666926 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666941 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666954 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666966 5103 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666979 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666992 5103 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667006 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667018 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667033 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667044 5103 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667073 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667085 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.695601 5103 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715166 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.715202 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715229 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715242 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.741166 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768439 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 
30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768543 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768586 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.769475 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.770566 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.783826 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.818867 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.842957 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843008 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843021 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843041 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843084 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.845112 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:28 crc kubenswrapper[5103]: apiVersion: v1 Jan 30 00:11:28 crc kubenswrapper[5103]: clusters: Jan 30 00:11:28 crc kubenswrapper[5103]: - cluster: Jan 30 00:11:28 crc kubenswrapper[5103]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: server: https://api-int.crc.testing:6443 Jan 30 00:11:28 crc kubenswrapper[5103]: name: default-cluster Jan 30 00:11:28 crc kubenswrapper[5103]: contexts: Jan 30 00:11:28 crc kubenswrapper[5103]: - context: Jan 30 00:11:28 crc kubenswrapper[5103]: cluster: default-cluster Jan 30 00:11:28 crc kubenswrapper[5103]: namespace: default Jan 30 00:11:28 crc kubenswrapper[5103]: user: default-auth Jan 30 00:11:28 crc kubenswrapper[5103]: name: default-context Jan 30 00:11:28 crc kubenswrapper[5103]: current-context: default-context Jan 30 00:11:28 crc kubenswrapper[5103]: kind: Config Jan 30 00:11:28 crc kubenswrapper[5103]: preferences: {} Jan 30 00:11:28 crc kubenswrapper[5103]: users: Jan 30 00:11:28 crc kubenswrapper[5103]: - name: default-auth Jan 30 00:11:28 crc kubenswrapper[5103]: user: Jan 30 00:11:28 crc kubenswrapper[5103]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:28 crc kubenswrapper[5103]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:28 crc kubenswrapper[5103]: EOF Jan 30 00:11:28 crc kubenswrapper[5103]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2mbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-8lwjf_openshift-ovn-kubernetes(b3efa2c9-9a52-46ea-b9ad-f708dd386e79): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.846259 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.855461 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.864641 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.867250 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.867368 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.870838 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.871489 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.873754 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.877068 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.881470 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.887455 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.888640 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.893930 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.900555 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.901105 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.915175 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.917567 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: 
I0130 00:11:28.923566 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.924727 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.934974 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.940692 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.941197 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.942602 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.942747 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.943274 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.946028 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.947575 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948967 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949008 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949021 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949511 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.952539 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.953200 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed60012_d4e8_45fd_b124_fe7d6ca49ca0.slice/crio-eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349 WatchSource:0}: Error finding container eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349: Status 404 returned error can't find the container with id eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.955535 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bf4b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-6tmbq_openshift-multus(2ed60012-d4e8-45fd-b124-fe7d6ca49ca0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.956769 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podUID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.973356 5103 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.985192 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.986972 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.990676 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.992685 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.998245 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.014770 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.016362 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.017173 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.033974 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.034691 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051529 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051724 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051990 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.052143 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.062113 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.062492 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.064224 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.068822 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.070821 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.071064 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.071030004 +0000 UTC m=+79.942528056 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.072239 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.073682 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.074656 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.075763 5103 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.075892 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.094094 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.106319 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.136139 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.137912 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.141689 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154897 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154961 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154976 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.155942 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.156878 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.158843 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.160079 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.160731 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.162695 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.164524 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.166970 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.168743 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.170549 5103 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.172172 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.176376 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.177781 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.179080 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.197558 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.198459 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.200519 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.215634 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.215828 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.253483 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.256364 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bf4b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-6tmbq_openshift-multus(2ed60012-d4e8-45fd-b124-fe7d6ca49ca0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.257154 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259727 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259757 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259768 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259798 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.260103 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"38221fc62e1b3d592b338664053e425c486a6c0fa3cf8ead449229dbfc4659da"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.261162 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podUID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.263144 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:29 crc kubenswrapper[5103]: apiVersion: v1 Jan 30 00:11:29 crc kubenswrapper[5103]: clusters: Jan 30 00:11:29 crc kubenswrapper[5103]: - cluster: Jan 30 00:11:29 crc kubenswrapper[5103]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: server: https://api-int.crc.testing:6443 Jan 30 00:11:29 crc kubenswrapper[5103]: name: default-cluster Jan 30 00:11:29 crc kubenswrapper[5103]: contexts: Jan 30 00:11:29 crc kubenswrapper[5103]: - context: Jan 30 00:11:29 crc kubenswrapper[5103]: cluster: default-cluster Jan 30 00:11:29 crc kubenswrapper[5103]: namespace: default Jan 30 00:11:29 crc kubenswrapper[5103]: user: default-auth Jan 30 00:11:29 crc kubenswrapper[5103]: name: default-context Jan 30 00:11:29 crc kubenswrapper[5103]: current-context: default-context Jan 30 00:11:29 crc kubenswrapper[5103]: kind: Config Jan 30 00:11:29 crc kubenswrapper[5103]: preferences: {} Jan 30 00:11:29 crc kubenswrapper[5103]: users: Jan 30 00:11:29 crc kubenswrapper[5103]: - name: default-auth Jan 30 00:11:29 crc kubenswrapper[5103]: user: Jan 30 00:11:29 crc kubenswrapper[5103]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:29 crc kubenswrapper[5103]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:29 crc kubenswrapper[5103]: EOF Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2mbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-node-8lwjf_openshift-ovn-kubernetes(b3efa2c9-9a52-46ea-b9ad-f708dd386e79): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.264730 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.264832 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.267201 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:29 crc kubenswrapper[5103]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:29 crc kubenswrapper[5103]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-swfns_openshift-multus(a7dd7e02-4357-4643-8c23-2fb57ba70405): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.267701 5103 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.268873 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.269440 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.269523 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-swfns" podUID="a7dd7e02-4357-4643-8c23-2fb57ba70405" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.271016 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.272369 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:29 crc kubenswrapper[5103]: while [ true ]; Jan 30 00:11:29 crc kubenswrapper[5103]: do Jan 30 00:11:29 crc kubenswrapper[5103]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:29 crc kubenswrapper[5103]: echo $f Jan 30 00:11:29 crc kubenswrapper[5103]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:29 crc kubenswrapper[5103]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:29 crc kubenswrapper[5103]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:29 crc kubenswrapper[5103]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: else Jan 30 00:11:29 crc kubenswrapper[5103]: mkdir $reg_dir_path Jan 30 00:11:29 crc kubenswrapper[5103]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:11:29 crc kubenswrapper[5103]: echo $d Jan 30 00:11:29 crc kubenswrapper[5103]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:29 crc kubenswrapper[5103]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:29 crc kubenswrapper[5103]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: sleep 60 & wait ${!} Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-226mj_openshift-image-registry(a0b75391-d8bb-4610-a69e-1f5c3a4e4eef): CreateContainerConfigError: services have not yet been read at 
least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.272855 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:29 crc kubenswrapper[5103]: set -euo pipefail Jan 30 00:11:29 crc kubenswrapper[5103]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:29 crc kubenswrapper[5103]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:29 crc kubenswrapper[5103]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:29 crc kubenswrapper[5103]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:11:29 crc kubenswrapper[5103]: TS=$(date +%s) Jan 30 00:11:29 crc kubenswrapper[5103]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:29 crc kubenswrapper[5103]: HAS_LOGGED_INFO=0 Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: log_missing_certs(){ Jan 30 00:11:29 crc kubenswrapper[5103]: CUR_TS=$(date +%s) Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:11:29 crc kubenswrapper[5103]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:29 crc kubenswrapper[5103]: HAS_LOGGED_INFO=1 Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: } Jan 30 00:11:29 crc kubenswrapper[5103]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 30 00:11:29 crc kubenswrapper[5103]: log_missing_certs Jan 30 00:11:29 crc kubenswrapper[5103]: sleep 5 Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:29 crc kubenswrapper[5103]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:29 crc kubenswrapper[5103]: --logtostderr \ Jan 30 00:11:29 crc kubenswrapper[5103]: --secure-listen-address=:9108 \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:29 crc kubenswrapper[5103]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.273985 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-226mj" podUID="a0b75391-d8bb-4610-a69e-1f5c3a4e4eef" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.274719 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.276238 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:29 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:29 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc 
kubenswrapper[5103]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:29 crc kubenswrapper[5103]: # will rollout control plane pods as well Jan 30 00:11:29 crc kubenswrapper[5103]: network_segmentation_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" != "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: route_advertisements_enable_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_policy_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:29 crc kubenswrapper[5103]: admin_network_policy_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: if [ "shared" == "shared" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:29 crc kubenswrapper[5103]: elif [ "shared" == "local" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:29 crc kubenswrapper[5103]: else Jan 30 00:11:29 crc kubenswrapper[5103]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:29 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:29 crc kubenswrapper[5103]: exec /usr/bin/ovnkube \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:29 crc kubenswrapper[5103]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:29 crc kubenswrapper[5103]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-enable-pprof \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-enable-config-duration \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${multi_network_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${gateway_mode_flags} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${route_advertisements_enable_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-ip=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-firewall=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-qos=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-service=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-multicast \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-multi-external-gateway=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${admin_network_policy_enabled_flag} Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.276473 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.277896 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.295311 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.336686 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361563 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361635 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361647 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.374904 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\
\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T0
0:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.414647 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.457973 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464005 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464042 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464077 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464098 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464113 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.496170 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.538647 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566118 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566214 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566258 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566277 5103 setters.go:618] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.579944 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.619520 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.656828 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672015 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672103 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672122 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672150 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672174 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.680535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.680773 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.681043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:31.681005483 +0000 UTC m=+81.552503575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.714943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\
\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\
\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.736279 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743503 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743523 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743547 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743563 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.758193 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762380 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762461 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762479 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762493 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.778388 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.778500 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784024 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784196 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784239 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784384 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.801408 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805480 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805519 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805530 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805547 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805557 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.819695 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.820292 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824637 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824675 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824738 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824752 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824764 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.839839 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.839991 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841586 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841628 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841640 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841654 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841664 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.854907 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.867673 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.867877 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.867905 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.868109 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.868166 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.868228 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.895652 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.935384 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943925 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943968 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.944006 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.944023 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.989571 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.015928 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.046944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.046998 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047044 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047110 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.062573 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.088898 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.089117 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:32.0890861 +0000 UTC m=+81.960584172 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.096862 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.138110 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149245 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149310 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149333 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149367 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149388 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.174430 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.215942 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251653 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251693 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.256820 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.354517 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355302 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355352 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355378 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355397 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457668 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457755 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.492932 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.493145 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493220 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493338 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493377 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.493342505 +0000 UTC m=+84.364840597 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493435 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.493412036 +0000 UTC m=+84.364910128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560496 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560621 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560635 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.594169 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.594240 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594412 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594433 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594445 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594458 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594506 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594525 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.594506101 +0000 UTC m=+84.466004153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594526 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.594564362 +0000 UTC m=+84.466062424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663780 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663908 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663964 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.766945 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767012 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767068 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767080 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.868205 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.868407 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869226 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869349 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869406 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869421 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.880490 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.891779 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.906192 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.918331 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.931271 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.943011 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.960788 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971280 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971353 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971369 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971437 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.975562 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.989284 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.009644 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.023439 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.037525 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.052424 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.061719 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.069210 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073397 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073427 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.078959 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.096202 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.106409 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.120727 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e
1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175648 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175696 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175707 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175725 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175737 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.278379 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279263 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279524 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279574 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279603 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.382870 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383265 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383334 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383378 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486362 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486377 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486396 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486410 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588980 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588989 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.589003 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.589030 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691698 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691707 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691722 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691734 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.706250 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.706347 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.706400 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:35.706386693 +0000 UTC m=+85.577884745 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794383 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794420 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794432 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867713 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867713 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867962 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.867966 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.868105 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.868172 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.874591 5103 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896871 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896887 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896929 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.998813 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999037 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999074 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999092 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999107 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101528 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101541 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.109749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:32 crc kubenswrapper[5103]: E0130 00:11:32.110124 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:36.110081023 +0000 UTC m=+85.981579115 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204968 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.205005 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.205017 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307950 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307977 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307997 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410879 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410892 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513335 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513408 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513424 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513442 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513454 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.615957 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616009 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616018 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616035 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616062 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718660 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718673 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718695 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718728 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820869 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820924 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820942 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820953 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.868402 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:32 crc kubenswrapper[5103]: E0130 00:11:32.868922 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923091 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923101 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923117 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923128 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.025939 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026026 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026070 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026087 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129246 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129285 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129320 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232531 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232579 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232589 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232604 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232615 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335557 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335630 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335644 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335674 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335694 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438735 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438796 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438811 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438838 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438850 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542427 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542443 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645424 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645492 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645508 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645531 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645545 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747541 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747593 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747605 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747623 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747636 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850058 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850116 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871877 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871928 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871879 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872111 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872223 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872352 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953176 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953267 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953410 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055914 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158496 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158554 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158566 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158583 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158595 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.218431 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.219534 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.219793 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260749 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260775 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260832 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364527 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364686 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364886 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.365016 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468455 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468488 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468540 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.542545 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.542650 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.542782 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.542870 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.542845704 +0000 UTC m=+92.414343796 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.543538 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.543597 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.543579702 +0000 UTC m=+92.415077784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570928 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570975 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570993 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.643934 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.644024 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644235 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644278 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644296 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:42.644363699 +0000 UTC m=+92.515861781 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644922 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644990 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.645015 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.645209 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.645170869 +0000 UTC m=+92.516668951 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674029 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674116 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674135 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674161 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674186 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777283 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777302 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777326 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777461 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.867511 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.867762 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880435 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880460 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880492 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880516 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.983509 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.983985 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984239 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984383 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984554 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.087705 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.088702 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.088887 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.089108 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.089246 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192456 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192529 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192577 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192597 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294691 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294746 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294762 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294803 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397391 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397400 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397419 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500221 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500272 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500281 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500296 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500307 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602631 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602692 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602708 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602728 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602741 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704986 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.705009 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.759362 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.759577 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.759707 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:43.759679526 +0000 UTC m=+93.631177608 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.807948 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808012 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808024 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808076 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868313 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868388 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868467 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868590 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868754 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868939 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911034 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911157 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911225 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911248 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013754 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013873 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013903 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013924 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.116949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117081 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117125 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.165775 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:36 crc kubenswrapper[5103]: E0130 00:11:36.166142 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:44.166110043 +0000 UTC m=+94.037608135 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219891 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219936 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219954 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322211 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322299 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322360 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322383 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425423 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425465 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528828 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528859 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528878 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631798 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631846 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631858 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631878 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631891 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734789 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734902 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734922 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734951 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734970 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837516 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837574 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837587 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837630 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.867848 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:36 crc kubenswrapper[5103]: E0130 00:11:36.868027 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940040 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940146 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940165 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940212 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.042926 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043105 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043142 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.091212 5103 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146333 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146437 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146470 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146495 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249211 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249278 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249295 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249322 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249341 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383433 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383486 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383513 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486458 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486717 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486752 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486776 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591622 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591685 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591710 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694178 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694282 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694351 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694378 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797212 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797280 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797299 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797325 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797345 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867482 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.867675 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867697 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.867857 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867905 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.868002 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900101 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900179 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900248 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003738 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003758 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003788 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003809 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106730 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106857 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106881 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210485 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210587 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210615 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210650 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210674 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313442 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313508 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313530 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313555 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313707 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416309 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416334 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416360 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416381 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518613 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518683 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518703 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518729 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621630 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621679 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621700 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724268 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724328 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724345 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724370 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724391 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827596 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827612 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827623 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.867700 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:38 crc kubenswrapper[5103]: E0130 00:11:38.867913 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930149 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930218 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930235 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930283 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033247 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033271 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033329 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.136495 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.136894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137025 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137212 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137339 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240356 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240430 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240450 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240478 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240504 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343021 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343119 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343139 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343195 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445668 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445693 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445712 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549300 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549374 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549426 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549444 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652300 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652403 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652562 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652642 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652667 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755161 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755220 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755249 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857824 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857882 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857904 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867563 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867626 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867564 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.867807 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.867975 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.868779 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960312 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960321 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960337 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960347 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062637 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062687 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062725 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062737 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158671 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158735 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158755 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158767 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.170477 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173870 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173918 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173931 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173950 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173961 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.184456 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188031 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188093 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188129 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188145 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188153 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.199096 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202693 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202785 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202813 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202832 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.214068 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218745 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218754 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218767 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218779 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.226661 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.226781 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228018 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228079 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228092 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228109 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228121 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.305795 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.308432 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.320133 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329855 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329914 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329962 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.334120 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.344044 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.352793 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.369668 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.380346 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.390775 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.399660 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.406298 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.412436 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.420011 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432536 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432595 5103 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432623 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432633 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.433171 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.439667 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\
\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.447961 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 
+0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.456278 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.465738 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.483027 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.493398 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.503830 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534820 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534859 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534869 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534886 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534897 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638431 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638934 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638952 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638978 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638996 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741827 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741874 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741894 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.844997 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845094 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845115 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845142 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845163 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.867790 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.868284 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.887749 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\
\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.901729 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.919714 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.932914 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947684 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947734 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947745 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947774 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.963696 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.983874 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.998039 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.011566 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.025535 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.039409 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051198 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051216 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051230 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051778 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.069619 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.079958 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\
\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.093468 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 
+0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.105341 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.117700 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.130367 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.140173 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.153451 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154076 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154124 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154136 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257371 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257425 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257445 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257468 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257487 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.313117 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ca66bc51f5182ad2848199e1ce4c53eace8150ce3903b340b402f6cc7f00ed42"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.314971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317162 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" exitCode=0 Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317275 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317330 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317347 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.324459 5103 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.325399 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.333067 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced
0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.345483 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.358923 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359147 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359202 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359253 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.372560 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.385884 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.397764 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.417794 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.435148 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.452599 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461636 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461687 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461735 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.473281 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.484934 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.493465 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.501878 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.527305 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.541687 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.562477 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563522 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563569 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563583 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563600 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563612 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.572494 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.581351 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666159 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666217 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666234 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666257 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666275 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768576 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768665 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768694 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768724 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.867730 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.868360 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.868507 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.869698 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.869837 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.870020 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873686 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873743 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873791 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873817 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977309 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977354 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977373 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080244 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080295 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080313 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080345 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183828 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183861 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183886 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285808 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285864 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285881 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285893 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.323967 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.326971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f9d4456cff54a878b20f8da7f00f13f75d8988ff57db65c6a3b57af33f1e7207"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.329895 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.331558 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bs8rz" event={"ID":"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e","Type":"ContainerStarted","Data":"d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387714 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387766 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387817 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489841 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489896 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489937 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489955 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.555225 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.555345 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555427 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555484 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555548 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.555521256 +0000 UTC m=+108.427019338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555580 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.555566437 +0000 UTC m=+108.427064519 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.580793 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592255 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592320 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592340 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592355 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.594640 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.608726 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.632254 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.645291 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.656940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.657017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657173 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657209 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657220 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657298 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 
podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.657270286 +0000 UTC m=+108.528768338 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657767 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657799 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657811 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657867 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.65785058 +0000 UTC m=+108.529348632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.658189 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.668496 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced
0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.681025 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.693153 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694108 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694123 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694147 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694161 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.704313 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.717333 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.728978 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.760119 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.773353 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.783085 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.792283 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795925 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795999 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.796022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.796038 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.800868 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.808129 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.814712 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.822084 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.829274 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.837094 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.850916 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.858883 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.867419 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.867779 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.875420 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.886521 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898723 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898790 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898804 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898842 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.911807 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.926892 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.934813 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.947722 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.956585 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.967943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4456cff54a878b20f8da7f00f13f75d8988ff57db65c6a3b57af33f1e7207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca66bc51f5182ad2848199e1ce4c53eace8150ce3903b340b402f6cc7f00ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 
00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.978963 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.987677 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001638 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001688 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001730 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.004439 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.014663 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.024271 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.033431 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104146 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104193 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104205 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104223 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104236 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.205946 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.205991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206004 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206020 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206030 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308243 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308294 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308337 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.338891 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.343187 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.343227 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.346260 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.347694 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c00427c4884245a18d4fdb095bd973b778a49a0f7191904be6dec15bdd672466"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.349943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.350085 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"8380f1a09b9ebf3cdb88be129a121cf08d82551f2019e61e3b89fbec5c6f12b3"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.367459 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.380128 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.389898 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.400764 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.406991 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410860 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410918 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.411076 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.415140 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.424028 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.442688 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.454244 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.466195 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.478989 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512710 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512781 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512793 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512814 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512842 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.547190 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podStartSLOduration=70.54717141 podStartE2EDuration="1m10.54717141s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.546663698 +0000 UTC m=+93.418161760" watchObservedRunningTime="2026-01-30 00:11:43.54717141 +0000 UTC m=+93.418669462" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.606883 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.606847779 podStartE2EDuration="17.606847779s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.593325931 +0000 UTC m=+93.464824003" watchObservedRunningTime="2026-01-30 00:11:43.606847779 +0000 UTC m=+93.478345831" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614656 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614705 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614739 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.636994 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.636977000999998 podStartE2EDuration="17.636977001s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.635758481 +0000 UTC m=+93.507256553" watchObservedRunningTime="2026-01-30 00:11:43.636977001 +0000 UTC m=+93.508475053" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.702903 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-swfns" podStartSLOduration=70.702886341 podStartE2EDuration="1m10.702886341s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.693142954 +0000 UTC m=+93.564641016" watchObservedRunningTime="2026-01-30 00:11:43.702886341 +0000 UTC m=+93.574384393" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.703107 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podStartSLOduration=70.703103626 podStartE2EDuration="1m10.703103626s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.70287015 +0000 UTC m=+93.574368212" watchObservedRunningTime="2026-01-30 00:11:43.703103626 +0000 UTC m=+93.574601678" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717207 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717246 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717256 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717271 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717282 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.731813 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.731796053 podStartE2EDuration="17.731796053s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.731571417 +0000 UTC m=+93.603069469" watchObservedRunningTime="2026-01-30 00:11:43.731796053 +0000 UTC m=+93.603294105" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.750814 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.750800384 podStartE2EDuration="17.750800384s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.749580544 +0000 UTC m=+93.621078596" watchObservedRunningTime="2026-01-30 00:11:43.750800384 +0000 UTC m=+93.622298436" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.769447 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.769648 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.769745 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:59.769721883 +0000 UTC m=+109.641219995 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.817026 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bs8rz" podStartSLOduration=71.817008301 podStartE2EDuration="1m11.817008301s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.799606489 +0000 UTC m=+93.671104551" watchObservedRunningTime="2026-01-30 00:11:43.817008301 +0000 UTC m=+93.688506353" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819214 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819262 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819276 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819297 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819312 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868233 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868299 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868260 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868389 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868476 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868718 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921320 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921371 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921386 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921404 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921415 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024110 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024252 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024284 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024312 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127137 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127155 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127184 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127202 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.173542 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5103]: E0130 00:11:44.173685 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.17366524 +0000 UTC m=+110.045163292 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229818 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229904 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229928 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229962 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229986 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332622 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332699 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332727 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.355020 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420" exitCode=0 Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.355123 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.362094 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434497 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434561 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434580 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434592 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537493 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537572 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537591 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537603 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.639953 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640025 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640097 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640128 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640148 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743368 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743440 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743453 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743472 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743483 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846418 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846458 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846466 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846493 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.868460 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:44 crc kubenswrapper[5103]: E0130 00:11:44.868761 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949539 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949602 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949639 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949657 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051839 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051885 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051916 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154444 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154590 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154604 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154613 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.257988 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258036 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258063 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258073 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360846 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360889 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360898 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360925 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463370 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463423 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463436 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463455 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463467 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566175 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566245 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566268 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566284 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668236 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668293 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668305 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668337 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771787 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771844 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771864 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771879 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.867809 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.867811 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.868244 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.868297 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.868430 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.867980 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874410 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874451 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874462 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874479 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874490 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976901 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976976 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976995 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.977026 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.977089 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080287 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080330 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080341 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080374 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.182585 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183286 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183312 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183331 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288525 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288577 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288589 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288609 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288623 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.370242 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-226mj" event={"ID":"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef","Type":"ContainerStarted","Data":"a0930183f1f4292ce8a16800710c911eabe584c23ad6a0c11628c72ca3f07140"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.375129 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.379229 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.386802 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-226mj" podStartSLOduration=73.386788938 podStartE2EDuration="1m13.386788938s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:46.386518002 +0000 UTC m=+96.258016064" watchObservedRunningTime="2026-01-30 00:11:46.386788938 +0000 UTC m=+96.258286990" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391728 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391855 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391955 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.392031 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.392129 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.494981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495032 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495061 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495097 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599069 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599135 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599158 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599198 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701438 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701490 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701505 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701515 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803611 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803633 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803651 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.867674 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:46 crc kubenswrapper[5103]: E0130 00:11:46.868122 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.868465 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:46 crc kubenswrapper[5103]: E0130 00:11:46.868866 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905189 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905275 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905364 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007823 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007895 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007912 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007943 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007961 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110759 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110823 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110840 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110865 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110885 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.212990 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213095 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213118 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213129 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315303 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315366 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315379 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315413 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.385209 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"d70a15da4267dab2faca43e16238c605ba7c8b5aba4f4f76d7eb2342b799a2e0"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417801 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417856 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417868 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417888 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417918 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519811 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519827 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519849 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519867 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621885 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724770 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724822 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724835 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724853 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724866 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.826954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827112 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827143 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827164 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867584 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867629 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867761 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.867776 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.867996 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.868201 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930365 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930430 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930477 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930570 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033236 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033325 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033352 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033371 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136315 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136429 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136462 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136485 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238773 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238821 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238831 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238859 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341329 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341882 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341895 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.391202 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0" exitCode=0 Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.391256 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443463 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443532 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443557 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546278 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546367 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546417 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649210 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649241 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649260 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751584 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751603 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751632 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751653 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854625 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854757 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854776 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854826 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.873706 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:48 crc kubenswrapper[5103]: E0130 00:11:48.873847 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957524 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957544 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957571 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957589 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059889 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059975 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059995 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.060023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.060043 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162915 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162974 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.163017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.163076 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265742 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265817 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265836 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265881 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368905 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368932 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368969 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368994 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471696 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471767 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471831 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575125 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575197 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575217 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575231 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678799 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678889 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.781955 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782075 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782099 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782117 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868064 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868261 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868423 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868651 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868700 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868773 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884683 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884789 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884832 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884849 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987388 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987460 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987473 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987501 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987516 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090417 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090478 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090488 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090521 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193203 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193269 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193282 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193301 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193317 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295667 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295692 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295708 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400797 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400873 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400919 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.503732 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504240 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504250 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504266 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504277 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588894 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.631918 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h"] Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.640846 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644062 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644110 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644087 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.645315 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770315 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770468 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770629 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770918 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.853507 5103 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.862741 5103 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.868044 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:50 crc kubenswrapper[5103]: E0130 00:11:50.868213 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872390 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872472 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872498 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872569 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872620 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872637 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872697 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.874135 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 
crc kubenswrapper[5103]: I0130 00:11:50.881917 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.888482 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.965119 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: W0130 00:11:50.976931 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5044c8_5ef7_4573_b468_23f35b0a9776.slice/crio-5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a WatchSource:0}: Error finding container 5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a: Status 404 returned error can't find the container with id 5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.413107 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="c3c72b6d4a189f1a50f6897c8bae426da23207df7906db7f2c038123cb36e44d" exitCode=0 Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.413236 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"c3c72b6d4a189f1a50f6897c8bae426da23207df7906db7f2c038123cb36e44d"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.424246 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.425968 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" event={"ID":"5f5044c8-5ef7-4573-b468-23f35b0a9776","Type":"ContainerStarted","Data":"5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660365 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660427 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660447 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.708410 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podStartSLOduration=78.70838096 podStartE2EDuration="1m18.70838096s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:51.707784506 +0000 UTC m=+101.579282588" watchObservedRunningTime="2026-01-30 00:11:51.70838096 +0000 UTC m=+101.579879032" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.749033 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.749711 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.867855 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868175 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.868035 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868391 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.868221 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868582 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.432722 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58"} Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.434592 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" event={"ID":"5f5044c8-5ef7-4573-b468-23f35b0a9776","Type":"ContainerStarted","Data":"69589db04036bfeab22e07f3489d4c166326d42aa7c5c626379206f4bba0b2ea"} Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.867972 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:52 crc kubenswrapper[5103]: E0130 00:11:52.868221 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.442299 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58" exitCode=0 Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.442371 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58"} Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.466688 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" podStartSLOduration=81.466668636 podStartE2EDuration="1m21.466668636s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:52.482256237 +0000 UTC m=+102.353754309" watchObservedRunningTime="2026-01-30 00:11:53.466668636 +0000 UTC m=+103.338166698" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867278 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867335 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867283 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867439 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867576 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867648 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:54 crc kubenswrapper[5103]: I0130 00:11:54.451744 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6"} Jan 30 00:11:54 crc kubenswrapper[5103]: I0130 00:11:54.867732 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:54 crc kubenswrapper[5103]: E0130 00:11:54.867872 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.463440 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vsrcq"] Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.463659 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.463819 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.867886 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.867990 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.868440 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.868497 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:56 crc kubenswrapper[5103]: I0130 00:11:56.868314 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:56 crc kubenswrapper[5103]: I0130 00:11:56.868379 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:56 crc kubenswrapper[5103]: E0130 00:11:56.868611 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:56 crc kubenswrapper[5103]: E0130 00:11:56.868989 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:57 crc kubenswrapper[5103]: I0130 00:11:57.867557 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:57 crc kubenswrapper[5103]: E0130 00:11:57.867765 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:57 crc kubenswrapper[5103]: I0130 00:11:57.867801 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:57 crc kubenswrapper[5103]: E0130 00:11:57.867918 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.576990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.577147 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577200 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577295 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577318 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.577287547 +0000 UTC m=+140.448785629 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577394 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.577365939 +0000 UTC m=+140.448864031 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.679031 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.679153 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679381 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679408 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679429 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679510 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.679486808 +0000 UTC m=+140.550984890 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679509 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679592 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679622 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679756 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.679717644 +0000 UTC m=+140.551215736 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.868125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.868282 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.869532 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.869733 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.869830 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.870541 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.473590 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6" exitCode=0 Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.473671 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6"} Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.794955 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.795153 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.795226 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.795210604 +0000 UTC m=+141.666708656 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.868228 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.868365 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.868472 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.868677 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.198675 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.198993 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.198951276 +0000 UTC m=+142.070449378 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.480494 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b"} Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.867569 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.869856 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.870028 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.870221 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.907590 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.907876 5103 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.956875 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.006107 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.006653 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.016253 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.027718 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.028638 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.028929 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029220 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029342 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029440 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029706 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.030149 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.030428 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.031594 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.039167 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109885 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109965 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110067 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110161 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110238 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110317 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110394 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110497 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110575 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110650 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110721 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110793 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110866 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110946 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.111026 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.111124 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155022 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-qsf67"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155172 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155679 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.161809 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162089 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162230 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162467 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162612 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162784 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162938 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.163072 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165281 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165486 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165786 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166102 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166284 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166460 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166704 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.174948 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.179147 5103 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192571 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192783 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192805 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196222 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196575 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196591 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196661 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200345 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200513 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200638 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200761 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.201004 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.201367 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.213107 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.214288 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.214985 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215094 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215132 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215167 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215191 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215337 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215383 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215409 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215554 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.216344 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.216504 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215969 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217376 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217410 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217436 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217456 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217490 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217860 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217884 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217896 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217956 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217996 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218019 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218061 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218134 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.219617 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220155 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220492 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220759 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.224864 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc 
kubenswrapper[5103]: I0130 00:12:01.228473 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.228646 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.229687 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.235238 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.235454 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.236005 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.237287 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.237523 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238077 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238273 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238909 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238916 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.239220 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.319759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320369 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320516 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320578 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320608 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321416 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321458 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321482 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 
00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321536 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321916 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321967 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321995 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322084 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322109 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322179 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322207 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322303 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322329 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322366 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322403 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322786 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.323254 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.324460 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.326914 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.327193 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.329006 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.346195 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.348063 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.348245 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356075 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356209 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356887 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.357261 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.357608 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.372341 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424089 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424157 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424361 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424481 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424519 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424585 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424661 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424681 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424745 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424766 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424819 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424863 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425319 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425847 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod 
\"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425965 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425989 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.428228 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.429581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.430306 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.431460 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.440349 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.444899 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc 
kubenswrapper[5103]: I0130 00:12:01.446293 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.456438 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.468415 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.477423 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.489854 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b" exitCode=0 Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.490012 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.511947 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.523415 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526213 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526333 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526487 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526510 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526529 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526551 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: 
\"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526594 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.527992 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.528266 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.537983 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.558196 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.569485 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b"} Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.569549 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.573307 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.574532 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.581153 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.581735 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629497 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629720 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629820 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630058 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630421 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: 
\"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630492 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.632335 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.638833 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.639327 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.644075 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.649974 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.654418 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.658637 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.675495 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.731405 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.731568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.732101 5103 configmap.go:193] Couldn't get configMap openshift-image-registry/serviceca: object "openshift-image-registry"/"serviceca" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.732149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca podName:c5938973-a6f9-4d60-b605-3f02b2c1c84f nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.232133977 +0000 UTC m=+112.103632029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca") pod "image-pruner-29495520-x6t57" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f") : object "openshift-image-registry"/"serviceca" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.754735 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.763871 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.763925 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.764000 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.767306 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.767328 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832765 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832840 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832964 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.833035 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: W0130 00:12:01.882779 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb3c35d_63fc_4a35_91ea_ef0e217fc5d0.slice/crio-0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482 WatchSource:0}: Error finding container 0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482: Status 404 returned error can't find the container with id 0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482 Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.923800 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934208 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934236 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934301 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935158 5103 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935268 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.435247168 +0000 UTC m=+112.306745221 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.935456 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935158 5103 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935507 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.435498695 +0000 UTC m=+112.306996747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947270 5103 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947782 5103 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947796 5103 projected.go:194] Error preparing data for projected volume kube-api-access-x5qk2 for pod openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947891 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2 podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.447868415 +0000 UTC m=+112.319366467 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5qk2" (UniqueName: "kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:12:02 crc kubenswrapper[5103]: W0130 00:12:02.087031 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4022194a_f5e9_494f_b079_ddd414c3da50.slice/crio-52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6 WatchSource:0}: Error finding container 52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6: Status 404 returned error can't find the container with id 52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6 Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.238749 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.239848 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.374957 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.375254 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376235 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376429 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376567 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376704 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.377032 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382145 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382431 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382831 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.383689 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.383957 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384304 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384428 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384522 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384724 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384839 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384939 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385029 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385273 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385288 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385679 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.440892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.441005 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.441030 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.442952 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.447087 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.542581 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.543118 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.574961 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.581375 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: W0130 00:12:02.635012 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5938973_a6f9_4d60_b605_3f02b2c1c84f.slice/crio-f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0 WatchSource:0}: Error finding container f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0: Status 404 returned error can't find the container with id f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0 Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666814 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerStarted","Data":"9131b9500cdfd415e7ec77b417734cc2ba2d9446de26cd67b54fba245814badb"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666884 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666986 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.669484 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.669775 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670019 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670422 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670839 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.745929 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.745987 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.746019 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.773772 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.779900 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847427 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847458 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.849255 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.865014 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.873450 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.894230 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.897028 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.903507 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.904351 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.912973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"a0a8569837d450b0258dafe39d145a428bb48817a83228c57446c186695e2e5c"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.913017 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"9a5cd267c6e2d0a20dd4f22ec274fd163a4524dbbd1f722646c7daaf7c0264df"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.913031 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.988697 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052705 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052749 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052782 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052808 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: 
\"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.087116 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.087339 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.089728 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.089875 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123429 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"6eb6a3b8b96fafcdc3da9bddd43f830e446a37a40daab0ee1333d5204dfecefe"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123494 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerStarted","Data":"dfa2d5328b163a06e0784ef6748b897dd97edce0f633750bb32fdbd9501d39e5"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123517 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123751 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129401 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129594 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129724 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147406 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerStarted","Data":"f01ae49c3dbf6ce1c41262f39b1cfb6c8326085cddd7aa8f645756c56fc66e24"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147459 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"c80a2cc41703a4137b5b54d52cddf220a4c7bc6710518ed255865caec779f53a"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147482 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" event={"ID":"4022194a-f5e9-494f-b079-ddd414c3da50","Type":"ContainerStarted","Data":"52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147506 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" event={"ID":"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0","Type":"ContainerStarted","Data":"0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147506 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147526 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.149461 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.149836 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.150088 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154522 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154683 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154767 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154832 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154891 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.155694 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.155886 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.163041 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.173142 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.196578 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.196762 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.201636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.201888 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202074 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202124 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202154 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202739 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.255956 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkh5s\" (UniqueName: \"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.256960 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.256267 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257033 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257284 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257317 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257349 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257467 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.261406 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: W0130 00:12:03.275412 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee6bca0_0d30_4653_b2a4_a79ebde1fed9.slice/crio-d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858 WatchSource:0}: Error finding container d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858: Status 404 returned error can't find the container with id d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858 Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.287607 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" 
(UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.344200 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374720 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374783 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374874 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375228 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375414 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375461 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375528 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkh5s\" (UniqueName: 
\"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375619 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.380149 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.384020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.391931 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.392689 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkh5s\" (UniqueName: \"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.423547 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439479 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439686 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439907 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.445802 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.445918 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446110 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446440 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446748 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446875 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446967 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447084 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447751 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447937 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.449481 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476612 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476652 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod 
\"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476687 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477124 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477159 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477360 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477417 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477453 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477480 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477514 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.477539 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.478318 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.480612 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.480711 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.493231 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.523918 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.524022 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.526179 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.528842 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.529305 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.529459 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.532741 5103 generic.go:358] "Generic (PLEG): container finished" podID="e9100695-b78d-4b2f-9cea-9d022064c792" containerID="911cfd942b49cf6ceaca0342397db4702338409b8ea3eddfbf7731f2ad3b5a53" exitCode=0 Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.549143 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.549258 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563206 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563250 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563482 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.568639 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.568958 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569145 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569462 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569635 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.570645 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.574762 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579271 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579347 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579394 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579450 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579473 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579531 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579570 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579651 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579707 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.582719 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.582945 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.583268 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.586941 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.591271 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.591825 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.600410 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619693 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619743 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerStarted","Data":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619758 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerDied","Data":"911cfd942b49cf6ceaca0342397db4702338409b8ea3eddfbf7731f2ad3b5a53"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619774 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" event={"ID":"22187967-c3cb-4aec-b6d5-65c7c6167554","Type":"ContainerStarted","Data":"beafbe80e56c1ea1eef4b374e8294a506eec237632db812c7cf796d7effbab33"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619788 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"9c8f4b52155e3ae7036283a61c621da7d9510d4baa4a6376d7850ec6f82cd529"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620017 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" event={"ID":"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9","Type":"ContainerStarted","Data":"d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620039 5103 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerStarted","Data":"f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620070 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"7e56449ddcc6bfdcfae161b44edb397e26d63a11513d624eed0735d0abe80820"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620088 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.622342 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.622771 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podStartSLOduration=90.622706965 podStartE2EDuration="1m30.622706965s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:03.583883552 +0000 UTC m=+113.455381644" watchObservedRunningTime="2026-01-30 00:12:03.622706965 +0000 UTC m=+113.494205037" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.625794 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.626387 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.627263 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.627470 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.650589 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.650766 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.654935 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.654987 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.681290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.682968 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.683108 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.683268 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.684308 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.686298 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.687931 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688013 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688110 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688140 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688144 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688176 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688193 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688218 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688601 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.692391 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.693347 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.693826 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.702172 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709438 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709688 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709867 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713309 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713522 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713766 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.714974 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.724197 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743460 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743584 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743511 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755076 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755186 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755653 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.756125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.757552 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.762303 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.762338 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.764571 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.766636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.771201 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.778119 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.779110 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.779526 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.781641 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791353 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791441 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4shg\" (UniqueName: \"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791645 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791790 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791857 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791902 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.792227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: E0130 00:12:03.792396 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.292376484 +0000 UTC m=+114.163874536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.793373 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794309 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794846 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794887 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795003 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795030 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795203 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795264 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795285 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795345 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795368 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795414 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795471 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795498 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795562 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796638 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798101 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798112 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798265 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798406 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798601 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.802821 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.804736 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.807877 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.842030 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.851062 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.858951 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.860775 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.862071 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896339 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896461 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896508 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896528 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896551 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896568 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896587 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896605 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896624 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896641 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896659 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896675 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896695 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896716 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896732 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896756 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: 
\"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896778 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896794 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896869 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896914 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896932 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896952 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod 
\"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896972 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896996 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897025 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897099 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897123 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897226 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4shg\" (UniqueName: \"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod 
\"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897343 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.898682 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: E0130 00:12:03.899425 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.399391962 +0000 UTC m=+114.270890014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.899584 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.902293 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.909335 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.913366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.914351 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.914795 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.915747 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.916677 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.917632 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.917999 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.918852 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919367 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919456 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919836 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.921307 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.922740 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4shg\" (UniqueName: 
\"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.923100 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.924251 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.925362 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.975802 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.976834 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.982985 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.994351 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001646 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001691 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001876 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001917 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001989 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002085 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002102 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002122 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002163 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002676 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002761 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.003385 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod 
\"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.004223 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.005404 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.006058 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.007616 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.009309 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.009806 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.009962 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.509940056 +0000 UTC m=+114.381438178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.010404 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.011781 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.012548 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.015212 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.015869 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.016132 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.031935 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.034989 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.036653 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.036696 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.042275 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.050433 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.052941 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.061091 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.062646 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.066800 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.086008 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.094153 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.106609 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112281 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.111824 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112533 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112696 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.113966 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.61394155 +0000 UTC m=+114.485439602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114331 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114536 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114628 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.115382 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.115666 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.116066 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.616031351 +0000 UTC m=+114.487529403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.129020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.160298 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.176227 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.187328 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.192444 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcede4f0_4721_47c1_bc52_b68bf7ad29d4.slice/crio-7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239 WatchSource:0}: Error finding container 7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239: Status 404 returned error can't find the container with id 7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.193460 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.196200 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.200952 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.202551 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.210023 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217235 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217382 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217413 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217437 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217455 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217548 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217636 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217754 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod 
\"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217777 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217808 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217850 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217877 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.218138 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.71812023 +0000 UTC m=+114.589618272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.219570 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.219933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.221613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.221775 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.226167 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b3db2b_ab99_483b_a13c_4947269bc330.slice/crio-64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b WatchSource:0}: Error finding container 64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b: Status 404 returned error can't find the container with id 64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.234617 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.239405 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.252295 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.273759 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.274155 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.277246 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322930 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322953 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322972 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323018 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323076 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: 
\"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323110 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.325496 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.325424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.326344 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.826329277 +0000 UTC m=+114.697827319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.329933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.330683 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.336192 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.342551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.348230 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23500895_f472_4de5_afda_f1cc02807ceb.slice/crio-d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58 WatchSource:0}: Error finding container d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58: Status 404 returned error can't find the container with id d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.358801 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.369275 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.390951 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427098 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427537 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427571 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427590 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427617 5103 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427687 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427815 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427902 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.428363 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.928348333 +0000 UTC m=+114.799846375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.431286 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.439281 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.453361 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.475912 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.481751 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530036 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530156 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530184 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530200 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530224 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530253 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530324 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530365 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.531564 5103 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.531669 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.031650781 +0000 UTC m=+114.903148833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.532479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.032470391 +0000 UTC m=+114.903968443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.533713 5103 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.533790 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.033772863 +0000 UTC m=+114.905270985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.536626 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.537178 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.567098 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.567159 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.577707 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.578177 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586371 5103 projected.go:289] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586404 5103 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586478 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.086455522 +0000 UTC m=+114.957953574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.601934 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc8aa23_eb1a_486e_9462_499486335cdc.slice/crio-408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007 WatchSource:0}: Error finding container 408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007: Status 404 returned error can't find the container with id 408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.631241 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.631610 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.131589938 +0000 UTC m=+115.003087990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.655662 5103 generic.go:358] "Generic (PLEG): container finished" podID="91703ab7-2f05-4831-8200-85210adf830b" containerID="f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660" exitCode=0 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.692539 5103 generic.go:358] "Generic (PLEG): container finished" podID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerID="794ade07b1fe5623465f764c5eaf8d3c479eeb7e9a2066ff11ca2f40c30e5324" exitCode=0 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.716551 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.734036 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.734413 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.234397544 +0000 UTC m=+115.105895596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.744579 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n8bvp"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.747954 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.757889 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758094 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758349 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758562 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.780125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.804953 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836402 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836809 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836856 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836938 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836960 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" 
(UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.837092 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.337076136 +0000 UTC m=+115.208574188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943193 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943609 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943658 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943779 5103 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943878 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943886 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca 
podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.443859129 +0000 UTC m=+115.315357181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943890 5103 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943975 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.443964021 +0000 UTC m=+115.315462073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.944181 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.444172766 +0000 UTC m=+115.315670818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.944264 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958407 5103 projected.go:289] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958536 5103 projected.go:289] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958558 5103 projected.go:194] Error preparing data for projected volume kube-api-access-6bzkw for pod openshift-marketplace/marketplace-operator-547dbd544d-mf247: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958655 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.458631727 +0000 UTC m=+115.330129779 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bzkw" (UniqueName: "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.970178 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c445e1_3a33_419a_bd9a_0314b23539f7.slice/crio-27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb WatchSource:0}: Error finding container 27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb: Status 404 returned error can't find the container with id 27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.045361 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.045638 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.545580418 +0000 UTC m=+115.417078480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046281 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046481 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.046630 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.546616243 +0000 UTC m=+115.418114305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046764 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.048137 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.054106 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.147694 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.147829 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.64780202 +0000 UTC m=+115.519300102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.148072 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.148363 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.148884 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.648867526 +0000 UTC m=+115.520365618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.154849 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: W0130 00:12:05.216706 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3a441e4_5ade_4309_938a_0f4fe130a721.slice/crio-3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662 WatchSource:0}: Error finding container 3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662: Status 404 returned error can't find the container with id 3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662 Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.249515 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.249725 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.749678893 +0000 UTC m=+115.621176985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.250350 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.250815 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.75079067 +0000 UTC m=+115.622288752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.352256 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.352464 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.852431018 +0000 UTC m=+115.723929080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.352965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.353341 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.853328519 +0000 UTC m=+115.724826581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.377345 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.455963 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.456323 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.456391 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457002 5103 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457198 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.457119119 +0000 UTC m=+116.328617211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457328 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.957304264 +0000 UTC m=+115.828802376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457407 5103 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457544 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.457454277 +0000 UTC m=+116.328952379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.558523 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.558637 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558658 5103 projected.go:289] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558680 5103 projected.go:289] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558691 5103 projected.go:194] Error preparing data for projected volume kube-api-access-6bzkw for pod openshift-marketplace/marketplace-operator-547dbd544d-mf247: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.558735486 +0000 UTC m=+116.430233538 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bzkw" (UniqueName: "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558936 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.058925891 +0000 UTC m=+115.930423943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: W0130 00:12:05.603439 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ebc9fa5_f75b_4468_b4b8_83695dd067b6.slice/crio-737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be WatchSource:0}: Error finding container 737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be: Status 404 returned error can't find the container with id 737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.659465 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.659587 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.159565854 +0000 UTC m=+116.031063916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.659904 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.661029 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.161003789 +0000 UTC m=+116.032501851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.762550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.762666 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.262642006 +0000 UTC m=+116.134140068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.762805 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.763202 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.26318783 +0000 UTC m=+116.134685912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.864033 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.864244 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.364214202 +0000 UTC m=+116.235712284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.864612 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.865149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.365126024 +0000 UTC m=+116.236624086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.966573 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.966800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.466769072 +0000 UTC m=+116.338267134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.967925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.968341 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.46832795 +0000 UTC m=+116.339826012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.069116 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.069355 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.569309902 +0000 UTC m=+116.440807994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.069663 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.070188 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.570169472 +0000 UTC m=+116.441667554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.171120 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.671016511 +0000 UTC m=+116.542514603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.170898 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.171853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.172311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.672292942 +0000 UTC m=+116.543791034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.276073 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.276238 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.776214505 +0000 UTC m=+116.647712577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.276338 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.276747 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.776735897 +0000 UTC m=+116.648233959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.305553 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.305815 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.309801 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.310759 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.310965 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.311101 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.312602 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.313166 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.313688 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.316429 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.317064 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" podStartSLOduration=93.317034386 podStartE2EDuration="1m33.317034386s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:05.261309585 +0000 UTC m=+115.132807717" watchObservedRunningTime="2026-01-30 00:12:06.317034386 +0000 UTC m=+116.188532448" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.323956 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379000 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.379250 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.879211205 +0000 UTC m=+116.750709277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379364 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379644 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r822\" (UniqueName: \"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379863 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.380298 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.880281961 +0000 UTC m=+116.751780023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423512 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423589 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423772 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423829 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.482222 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483253 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483405 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7r822\" (UniqueName: 
\"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483592 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483740 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.485097 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.985029753 +0000 UTC m=+116.856527815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.491893 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.503563 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.506229 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r822\" (UniqueName: \"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.507952 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.516422 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.584816 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.584899 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.585199 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.085180685 +0000 UTC m=+116.956678727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.590639 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.643549 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.658245 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: W0130 00:12:06.678501 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7c825f_c092_4d5b_9a1d_be16df92e5a2.slice/crio-69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41 WatchSource:0}: Error finding container 69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41: Status 404 returned error can't find the container with id 69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41 Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.691616 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.692257 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.192225943 +0000 UTC m=+117.063724025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.794609 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.794958 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.294942277 +0000 UTC m=+117.166440329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: W0130 00:12:06.862118 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb15f695a_0fc1_4ab5_aad2_341f3bf6822d.slice/crio-0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45 WatchSource:0}: Error finding container 0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45: Status 404 returned error can't find the container with id 0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45 Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.896486 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.896743 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.396710638 +0000 UTC m=+117.268208700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.897256 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.897796 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.397771083 +0000 UTC m=+117.269269175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.998522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:06.998965 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.49894376 +0000 UTC m=+117.370441822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.100258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.100753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.600732701 +0000 UTC m=+117.472230763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.202498 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.202837 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.702798829 +0000 UTC m=+117.574296921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.306661 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.307304 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.807274455 +0000 UTC m=+117.678772537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.408385 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.408625 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.908594955 +0000 UTC m=+117.780093007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.408722 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.409197 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.909181889 +0000 UTC m=+117.780679951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.510096 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.510210 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.010191181 +0000 UTC m=+117.881689223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.510349 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.510629 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.010619092 +0000 UTC m=+117.882117144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.611709 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.612173 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.112114796 +0000 UTC m=+117.983612908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.713796 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.714387 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.214359148 +0000 UTC m=+118.085857330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.814944 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.815150 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.315123274 +0000 UTC m=+118.186621316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.815525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.815902 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.315882643 +0000 UTC m=+118.187380705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.917475 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.917761 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.417719005 +0000 UTC m=+118.289217097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.918279 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.918730 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.418708499 +0000 UTC m=+118.290206591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.019229 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.019551 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.519513386 +0000 UTC m=+118.391011478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.019862 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.020461 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.520437649 +0000 UTC m=+118.391935741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.121722 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.121917 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.621879412 +0000 UTC m=+118.493377504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.122163 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.122754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.622729292 +0000 UTC m=+118.494227384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218510 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218571 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" event={"ID":"4022194a-f5e9-494f-b079-ddd414c3da50","Type":"ContainerStarted","Data":"4a1663ce5228deaa796f1880984d01701e616d37c69f0f1cd59e42004c093c1c"} Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218607 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218710 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.223447 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.223643 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-j77tr" podStartSLOduration=95.223614041 podStartE2EDuration="1m35.223614041s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.488923648 +0000 UTC m=+116.360421710" watchObservedRunningTime="2026-01-30 00:12:08.223614041 +0000 UTC m=+118.095112143" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.223996 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.72397092 +0000 UTC m=+118.595469012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.224244 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" podStartSLOduration=95.224227426 podStartE2EDuration="1m35.224227426s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.4576584 +0000 UTC m=+116.329156472" watchObservedRunningTime="2026-01-30 00:12:08.224227426 +0000 UTC m=+118.095725528" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.227099 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.229330 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.229661 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327038 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327174 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327220 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327247 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327402 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " 
pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.328028 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.827999896 +0000 UTC m=+118.699497978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.428887 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.429209 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.929169572 +0000 UTC m=+118.800667624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429616 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429709 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.430290 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.430315 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.930303809 +0000 UTC m=+118.801801941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.430951 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.431156 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.437448 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.455869 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.531459 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.531754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:09.031710941 +0000 UTC m=+118.903209023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.532506 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.533255 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.033233538 +0000 UTC m=+118.904731610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.540214 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.634625 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.634878 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.134831305 +0000 UTC m=+119.006329397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.635179 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.635599 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.135583043 +0000 UTC m=+119.007081095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.736685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.736800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.23677412 +0000 UTC m=+119.108272172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.737413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.741359 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.24133265 +0000 UTC m=+119.112830722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.838681 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.838983 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.33894587 +0000 UTC m=+119.210443942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.940905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.941457 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.441423698 +0000 UTC m=+119.312921840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.042761 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.042943 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.542919642 +0000 UTC m=+119.414417704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.043305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.043691 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.54368088 +0000 UTC m=+119.415178942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.144991 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.145258 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.645228016 +0000 UTC m=+119.516726078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.145787 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.146247 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.64623031 +0000 UTC m=+119.517728372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.246959 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.247227 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.747185651 +0000 UTC m=+119.618683743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.248882 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.249365 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.749345613 +0000 UTC m=+119.620843675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.297335 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.297671 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.302257 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303218 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303325 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303420 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303343 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303550 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.304558 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.305463 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.305959 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.307540 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.307631 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310218 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310278 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"f75723f85c118908ad0270b5ef4a061e86c4987c9d6676b7ee5a570cf1358a52"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310322 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" event={"ID":"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0","Type":"ContainerStarted","Data":"220e9a40b0e50e9056393153e34715e3753415e89a3a1e0a8cb90d8927b042f1"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310375 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" 
event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerStarted","Data":"14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310401 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"fcd20598200cbf757c0c2051caf7ebf16a7451c09f1b9792561f7689e329b0b7"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310426 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" event={"ID":"fcede4f0-4721-47c1-bc52-b68bf7ad29d4","Type":"ContainerStarted","Data":"7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310485 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-qgd5c"] Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.311564 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29495520-x6t57" podStartSLOduration=97.311534403 podStartE2EDuration="1m37.311534403s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.672573961 +0000 UTC m=+118.544072023" watchObservedRunningTime="2026-01-30 00:12:09.311534403 +0000 UTC m=+119.183032505" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.311972 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.312247 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.334227 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.340938 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.350369 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.350581 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.85054549 +0000 UTC m=+119.722043562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.351254 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.351654 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.851635917 +0000 UTC m=+119.723133979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452720 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452864 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452901 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452926 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc 
kubenswrapper[5103]: I0130 00:12:09.452963 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452982 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.453140 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.953035739 +0000 UTC m=+119.824533851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453366 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453444 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453484 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453508 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453525 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453664 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453780 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453907 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453959 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555038 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555151 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555204 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555270 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555491 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555632 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555719 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555890 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556004 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556087 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556134 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556176 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.557371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.057343191 +0000 UTC m=+119.928841283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.657698 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.657979 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.157916623 +0000 UTC m=+120.029414715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.658395 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.658945 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.158920327 +0000 UTC m=+120.030418409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.760488 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.760677 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.260640337 +0000 UTC m=+120.132138449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.761117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.761566 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.261548689 +0000 UTC m=+120.133046771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.862807 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.863822 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.363796151 +0000 UTC m=+120.235294243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.864823 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.865581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.867277 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.868115 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.872858 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.873483 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875269 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875474 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875821 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875981 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.876023 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.876168 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.877098 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.965705 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.966114 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.466099285 +0000 UTC m=+120.337597337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.017094 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.017233 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.023445 5103 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6v8cn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" start-of-body= Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.023535 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podUID="fcede4f0-4721-47c1-bc52-b68bf7ad29d4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.032208 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.067127 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.067424 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.567370542 +0000 UTC m=+120.438868634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.068291 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.068709 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.568692474 +0000 UTC m=+120.440190546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.169189 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.169392 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.669347778 +0000 UTC m=+120.540845860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.169607 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.170697 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.669987113 +0000 UTC m=+120.541485165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.272420 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.272703 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.772664336 +0000 UTC m=+120.644162408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.272884 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.273232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.77321889 +0000 UTC m=+120.644716932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.378856 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.379695 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.879675964 +0000 UTC m=+120.751174026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.480692 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.481154 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.981139367 +0000 UTC m=+120.852637419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561852 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" event={"ID":"23500895-f472-4de5-afda-f1cc02807ceb","Type":"ContainerStarted","Data":"d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58"} Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561918 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561969 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.565029 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podStartSLOduration=97.564997283 podStartE2EDuration="1m37.564997283s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:10.039096397 +0000 UTC m=+119.910594479" watchObservedRunningTime="2026-01-30 00:12:10.564997283 +0000 UTC m=+120.436495375" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.581948 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.582138 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.082114049 +0000 UTC m=+120.953612101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.582228 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.582590 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.08258331 +0000 UTC m=+120.954081362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683242 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.683503 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.183461489 +0000 UTC m=+121.054959541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683636 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683690 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683931 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.684181 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.684223 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.684573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.184560136 +0000 UTC m=+121.056058248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.784986 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.785138 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.285115097 +0000 UTC m=+121.156613149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785969 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785991 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786074 5103 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: object "openshift-ingress"/"service-ca-bundle" not 
registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786087 5103 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: object "openshift-ingress"/"router-metrics-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786153 5103 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: object "openshift-ingress"/"router-stats-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286137642 +0000 UTC m=+121.157635694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"service-ca-bundle" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786441 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286426469 +0000 UTC m=+121.157924581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-metrics-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286453479 +0000 UTC m=+121.157951761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-stats-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286505061 +0000 UTC m=+121.158003253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.786593 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.786625 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786695 5103 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: object "openshift-ingress"/"router-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786764 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286737466 +0000 UTC m=+121.158235558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800428 5103 projected.go:289] Couldn't get configMap openshift-ingress/kube-root-ca.crt: object "openshift-ingress"/"kube-root-ca.crt" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800491 5103 projected.go:289] Couldn't get configMap openshift-ingress/openshift-service-ca.crt: object "openshift-ingress"/"openshift-service-ca.crt" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800505 5103 projected.go:194] Error preparing data for projected volume kube-api-access-bc9kk for pod openshift-ingress/router-default-68cf44c8b8-qgd5c: [object "openshift-ingress"/"kube-root-ca.crt" not registered, object "openshift-ingress"/"openshift-service-ca.crt" not registered] Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800576 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.300551422 +0000 UTC m=+121.172049574 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bc9kk" (UniqueName: "kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : [object "openshift-ingress"/"kube-root-ca.crt" not registered, object "openshift-ingress"/"openshift-service-ca.crt" not registered] Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.993528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.995302 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.495260089 +0000 UTC m=+121.366758181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.040798 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.040895 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.096745 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.097147 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.597131222 +0000 UTC m=+121.468629284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.197546 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.197742 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.697715564 +0000 UTC m=+121.569213626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.198089 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.198421 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.69840606 +0000 UTC m=+121.569904112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279842 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"c247a57b7fe7f2aa890d312a8303de8bb0e377c2050e84e99b30f9e6da1d45f3"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279912 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" event={"ID":"22187967-c3cb-4aec-b6d5-65c7c6167554","Type":"ContainerStarted","Data":"5b63698741944fc197cb263f73a75657a3d81eef13d32ee8cbee603537df5169"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279930 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" event={"ID":"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e","Type":"ContainerStarted","Data":"6249fcb5844b660333c4ac49692eac2cafb185ec4dbbebfcbc2ce3bb1e6f68d6"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279951 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279967 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" event={"ID":"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9","Type":"ContainerStarted","Data":"02061e1d5fc241364294c16f2752c64bc77dfc52fe8e426f1bbcf1d06b07d88f"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.280006 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.281620 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.283864 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.285525 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.286962 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287272 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287545 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287922 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288150 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288188 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288654 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289195 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289237 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"a10082817156a05dfaffb5e94545c160e23cbf636e1b055cd5f582f13eeccb23"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289260 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"b5c30c4a11fe11b38adaf1c964255efbb88e8214b00aab112b2963179e2c1b06"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289296 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289313 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289331 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289345 5103 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289360 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerStarted","Data":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289379 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289395 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289411 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerDied","Data":"f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289431 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289447 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"1d2e98ef1dc50c4908e70f14a0f924ff984fd6cbe6d6caca5516013a7e12baab"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289465 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289482 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerDied","Data":"794ade07b1fe5623465f764c5eaf8d3c479eeb7e9a2066ff11ca2f40c30e5324"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289500 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" event={"ID":"72531653-f2c6-4754-8209-24104364d6f4","Type":"ContainerStarted","Data":"0b2dea2b01a00baa58570f13ea4d5c67f2bb5bde5b5e20073a04eaba162eb45a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289516 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"010a3e79682217ed5f4858425ee9d8e68d2b2f0b6dedd9af218d4cec3798c424"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289529 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" event={"ID":"0ebc9fa5-f75b-4468-b4b8-83695dd067b6","Type":"ContainerStarted","Data":"737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289542 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:11 crc 
kubenswrapper[5103]: I0130 00:12:11.289545 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289556 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerStarted","Data":"22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289570 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"7436156915c575beccaacbc400badce8bfcf50425c941304b0e657d7e619767b"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289584 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289598 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" event={"ID":"c9dfcfad-0e85-4b3e-9a33-3729f7033251","Type":"ContainerStarted","Data":"4a24827a85cd26b0f0d53622ffa0da5764d3f74ad95b6d2fec9319059ff15c75"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289609 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" event={"ID":"35998b47-ed37-4a50-9553-18147918d9cb","Type":"ContainerStarted","Data":"6d6114ceb68ae67260e01f25a1b5cc7e5611f1aca85649fc5a25919d41ccae4a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289621 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" event={"ID":"8dc8aa23-eb1a-486e-9462-499486335cdc","Type":"ContainerStarted","Data":"408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289633 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"660c21923d02c550d66116b3d77994184dda07eefda1e6d7d5b7b4870b84e0f1"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289645 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289660 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289675 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289690 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"aa235fa5321f6a87667237367d6a035c2a4259ba213eb0974341d9e1f7e3562c"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289705 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" event={"ID":"f1c445e1-3a33-419a-bd9a-0314b23539f7","Type":"ContainerStarted","Data":"27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289722 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289736 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289752 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289763 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" event={"ID":"42f46e1b-e6a2-499c-9e01-fe08785a78a4","Type":"ContainerStarted","Data":"a371cabbeee1abe2e2c2ce5fb9e2ceca15f9e6c746f56f73aa9c6ceab42e9720"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7v6vx" event={"ID":"9bef77c6-141b-4cff-a91d-7515860a6a2a","Type":"ContainerStarted","Data":"d49beb1fb54d8b2a6fe43d988a8cfefa253a3c2b72d058a53b17fe4322292b64"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289785 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dtdff" event={"ID":"02410934-0df2-4e17-9042-91fa47becda6","Type":"ContainerStarted","Data":"5c0cd996ce9c244e51448d155956bda81f09898040572971daf165985965f737"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289799 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289810 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"4acbf56ab49e55320969361efd63d9b6fceec3394fc78dbf3c14fa0df602b17a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289822 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289833 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289843 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" event={"ID":"a3a441e4-5ade-4309-938a-0f4fe130a721","Type":"ContainerStarted","Data":"3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289853 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"2236dd21f8e2bee83df37b2fa78eb0cbaf3b44b8ed4703a935e77c81ecdb04a4"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289866 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289878 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289890 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299136 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299297 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.299335 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.79931215 +0000 UTC m=+121.670810212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299463 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299720 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299905 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqpf\" 
(UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299990 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.300045 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.800033978 +0000 UTC m=+121.671532040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.300100 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.300139 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.304283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.310742 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.316715 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.317994 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod 
\"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.320214 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402095 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.402211 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.902187478 +0000 UTC m=+121.773685530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402310 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402393 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402470 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrqpf\" (UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402492 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402610 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.403363 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.903346666 +0000 UTC m=+121.774844788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.409469 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.409514 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.436997 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrqpf\" (UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.503900 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.504076 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.00403772 +0000 UTC m=+121.875535782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.504341 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.504753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.004743177 +0000 UTC m=+121.876241229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.566444 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.566869 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.592943 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" event={"ID":"fcede4f0-4721-47c1-bc52-b68bf7ad29d4","Type":"ContainerStarted","Data":"de5f2e232def9eb29460cb73b3b6a441cefc8bedfec4bbb3082f3590c17d13f5"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593213 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593282 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8bvp" event={"ID":"2b7c825f-c092-4d5b-9a1d-be16df92e5a2","Type":"ContainerStarted","Data":"69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593307 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593325 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597631 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597818 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597951 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.608967 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.609101 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.10906829 +0000 UTC m=+121.980566342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609739 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609834 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609872 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609998 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610017 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610036 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610090 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.612415 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.112398601 +0000 UTC m=+121.983896653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.617620 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.618566 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.630020 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.630104 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.675592 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.676730 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.676797 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.678246 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59228: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685140 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685186 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"25ab8682e26ae83def4771bae81411f562ed9fea06908780b37e0a89075a13b8"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685210 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" event={"ID":"23500895-f472-4de5-afda-f1cc02807ceb","Type":"ContainerStarted","Data":"bd870aed1e11cb9ea1fecd7f733bcfe8e65906b77a196b0d59a7946da4604f87"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685226 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"22e1cc1ed66ae3403c9a5ecd0603d2ba86d3c9b62b69ff8de7d68025c41fd882"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685245 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685266 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685279 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685291 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" event={"ID":"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e","Type":"ContainerStarted","Data":"00c56f5736ab1c2203b6302368b034ae3ac0d41bb133d09d39041cb5a15bbfcd"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685309 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerStarted","Data":"e3d46683d3f3d86228a063dcb193d36e8067e6dad542d18de17ac86ad6dc3b86"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685323 5103 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685335 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685346 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685357 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685367 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685376 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685387 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685397 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685406 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685417 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685426 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685442 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.687269 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.692701 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.694946 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.695166 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.695923 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.696962 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podStartSLOduration=98.696948314 podStartE2EDuration="1m38.696948314s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:03.686756199 +0000 UTC m=+113.558254261" watchObservedRunningTime="2026-01-30 00:12:11.696948314 +0000 UTC m=+121.568446426" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.710574 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.710781 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.210765979 +0000 UTC m=+122.082264031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711312 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711378 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711404 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711426 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711475 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711613 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711672 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711703 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: 
\"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711725 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713191 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713582 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713693 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.713946 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.213929366 +0000 UTC m=+122.085427488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.736452 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.770747 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771089 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771105 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771113 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771124 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771133 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771141 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771156 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771169 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771176 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771198 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771211 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771232 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:11 crc 
kubenswrapper[5103]: I0130 00:12:11.771289 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771298 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.770981 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771532 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.774113 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.776766 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59232: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.786661 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.804065 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812422 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812556 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812587 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812658 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.812721 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.312690583 +0000 UTC m=+122.184188625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812801 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812948 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.813026 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.814385 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.314372704 +0000 UTC m=+122.185870746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.815581 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.815623 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" event={"ID":"8dc8aa23-eb1a-486e-9462-499486335cdc","Type":"ContainerStarted","Data":"7104caf1b88f03ec308833b1963b9304d7cb0c06133c827664919aac59c10ed2"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.818586 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.831777 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.834154 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.835428 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.855584 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.858196 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" event={"ID":"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4","Type":"ContainerStarted","Data":"cdae0d11c631ab549663e24b81f7bab5a9fd9beec8657d9a2ba7e1458b493106"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.865435 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.875094 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.883779 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59234: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.894109 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.894515 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.897067 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.897830 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.898463 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.899297 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"1f238a546c0532137a93386abdf3038e4d5be698d7c8bcb43d71c649c8772903"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.899619 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.904911 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.906079 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.915502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916326 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916399 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916429 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podStartSLOduration=98.916418611 podStartE2EDuration="1m38.916418611s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.436690551 +0000 UTC m=+116.308188623" watchObservedRunningTime="2026-01-30 00:12:11.916418611 +0000 UTC m=+121.787916853" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916556 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916595 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.916731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.416715618 +0000 UTC m=+122.288213670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916945 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.917704 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.918581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.918798 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" podStartSLOduration=99.918790559 podStartE2EDuration="1m39.918790559s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.475738779 +0000 UTC m=+116.347236841" watchObservedRunningTime="2026-01-30 00:12:11.918790559 +0000 UTC m=+121.790288611" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.919953 5103 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6v8cn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.919995 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podUID="fcede4f0-4721-47c1-bc52-b68bf7ad29d4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.920974 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.921163 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.923062 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.930242 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.932796 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.942833 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podStartSLOduration=98.942814132 podStartE2EDuration="1m38.942814132s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.693915819 +0000 UTC m=+118.565413891" watchObservedRunningTime="2026-01-30 00:12:11.942814132 +0000 UTC m=+121.814312184" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.949729 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" podStartSLOduration=98.949710129 podStartE2EDuration="1m38.949710129s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.708212356 +0000 UTC m=+118.579710418" watchObservedRunningTime="2026-01-30 00:12:11.949710129 +0000 UTC m=+121.821208181" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.953558 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.958279 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.965730 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkqv\" (UniqueName: 
\"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.975005 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.975203 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:11.999637 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59236: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.020464 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.025321 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.525299454 +0000 UTC m=+122.396797696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.044294 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.093310 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59252: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.126257 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.126910 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.626888261 +0000 UTC m=+122.498386313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.128145 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" podStartSLOduration=99.128120681 podStartE2EDuration="1m39.128120681s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.637159982 +0000 UTC m=+121.508658064" watchObservedRunningTime="2026-01-30 00:12:12.128120681 +0000 UTC m=+121.999618733" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.133925 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.144360 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" podStartSLOduration=99.144332704 podStartE2EDuration="1m39.144332704s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.842429825 +0000 UTC m=+121.713927877" watchObservedRunningTime="2026-01-30 00:12:12.144332704 +0000 UTC m=+122.015830756" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.158311 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" podStartSLOduration=99.158254892 podStartE2EDuration="1m39.158254892s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.934347366 +0000 UTC m=+121.805845428" watchObservedRunningTime="2026-01-30 00:12:12.158254892 +0000 UTC m=+122.029752944" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.160924 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" podStartSLOduration=99.160912767 podStartE2EDuration="1m39.160912767s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.972344339 +0000 UTC m=+121.843842391" watchObservedRunningTime="2026-01-30 00:12:12.160912767 +0000 UTC m=+122.032410819" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.171062 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.183035 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59268: no serving certificate available for the kubelet" 
Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.225868 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59272: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.227898 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.228369 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.728356154 +0000 UTC m=+122.599854206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.329347 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.329617 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.829578142 +0000 UTC m=+122.701076194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.330234 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.330708 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.830691329 +0000 UTC m=+122.702189381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.349432 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:12 crc kubenswrapper[5103]: W0130 00:12:12.380713 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0b1692_3dd7_4854_b53d_c32cd8162e1b.slice/crio-52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1 WatchSource:0}: Error finding container 52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1: Status 404 returned error can't find the container with id 52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1 Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.386588 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59288: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.410890 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.435143 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.435258 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.935208176 +0000 UTC m=+122.806706228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.435997 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.436731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.936719843 +0000 UTC m=+122.808217895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.537356 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.537537 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.03750963 +0000 UTC m=+122.909007692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.538285 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.538840 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.038830952 +0000 UTC m=+122.910329004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.639089 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.639313 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.13927821 +0000 UTC m=+123.010776262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.640378 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.640836 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.140825738 +0000 UTC m=+123.012323790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.741398 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.741586 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.241556783 +0000 UTC m=+123.113054845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.742229 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.742568 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.242553128 +0000 UTC m=+123.114051170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.775991 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.776101 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.843758 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.844008 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.34397062 +0000 UTC m=+123.215468682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.844476 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.844914 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.344895642 +0000 UTC m=+123.216393694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.907233 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" event={"ID":"f1c445e1-3a33-419a-bd9a-0314b23539f7","Type":"ContainerStarted","Data":"a6790b6b84b98ac0715e8ae1ea57b4ff27489c9d8bc09d2a4c34faaa2d387839"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.909014 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" event={"ID":"72531653-f2c6-4754-8209-24104364d6f4","Type":"ContainerStarted","Data":"755b769872c4c1c8e1133203e05165f9afa0d6264a8a7a7b26a990d175725976"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.910573 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"5618a8b65c664dc70f879be6c163a7f5280150d589d52cd4507f3676ec01a1f2"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.919784 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" start-of-body= Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.919851 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.925425 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"e92342672bb0b68b320b38b09be1530158a324962492b78f59e6e5cfc7c62ed0"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.927397 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"c6b51b19b3c7356936c3b9cae768bc16d2eb83e3dd1e5a42b4880e28b2d04278"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.928569 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"77c121a946385b331f6aa376bc2d4849ed3f9628c7c02c301ca3ebdbf4d821b3"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.929879 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2mh7r" 
event={"ID":"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300","Type":"ContainerStarted","Data":"01b43b80f0bd1c2e8e1b9fdca959751cbec342405793453756f645cf5c5c6360"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.935689 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" event={"ID":"35998b47-ed37-4a50-9553-18147918d9cb","Type":"ContainerStarted","Data":"7eed29d2d7d4583b9f952b68e6b57b89a754060c555d1c2c10eed5681fb2fe94"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.936864 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7v6vx" event={"ID":"9bef77c6-141b-4cff-a91d-7515860a6a2a","Type":"ContainerStarted","Data":"9aaae1a0beeed6aaacfd1b9d0998714ed50dadbd366711ecbf6866a2a127e075"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.937643 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.945446 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"618c988c78931a81603820eb3e891184d2a3644eb79244247e09c2d0c408abce"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.945800 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.946376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.446348115 +0000 UTC m=+123.317846187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.949354 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dtdff" event={"ID":"02410934-0df2-4e17-9042-91fa47becda6","Type":"ContainerStarted","Data":"4e9ce9a68542016b6f88a9291dacc4306623e88f04c8d4073cc32aca27ce9149"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.952973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerStarted","Data":"973863cd6d6133ec3ff6a7fd2a13f58a8dd52f466be2fd39e8f85026734e7547"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.047828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.048262 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.548243139 +0000 UTC m=+123.419741181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.071423 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59302: no serving certificate available for the kubelet" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.149596 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.149795 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.649762063 +0000 UTC m=+123.521260115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.150177 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.150588 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.650572303 +0000 UTC m=+123.522070355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.251528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.251821 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.75177129 +0000 UTC m=+123.623269342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.252148 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.252665 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.752637601 +0000 UTC m=+123.624135843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.301570 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.301629 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.302169 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.311484 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.362190 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.362294 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:13.862268653 +0000 UTC m=+123.733766705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.366622 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.368376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.868360571 +0000 UTC m=+123.739858713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.372319 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podStartSLOduration=101.372306946 podStartE2EDuration="1m41.372306946s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:13.325177992 +0000 UTC m=+123.196676054" watchObservedRunningTime="2026-01-30 00:12:13.372306946 +0000 UTC m=+123.243804988" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.467937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.468205 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.968176964 +0000 UTC m=+123.839675016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.468425 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.468955 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.968947923 +0000 UTC m=+123.840445975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.573399 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.574925 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.074904745 +0000 UTC m=+123.946402807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.676072 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.676579 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.176562532 +0000 UTC m=+124.048060584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.777194 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.777470 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.277455211 +0000 UTC m=+124.148953263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.882191 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.882495 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.382482491 +0000 UTC m=+124.253980543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.900116 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.925189 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" podStartSLOduration=100.925162457 podStartE2EDuration="1m40.925162457s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:13.43960331 +0000 UTC m=+123.311101372" watchObservedRunningTime="2026-01-30 00:12:13.925162457 +0000 UTC m=+123.796660509" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.970304 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"8dbe398184b3186e777d5d6b0a4e6d06823869a17c1e99e54f23987fc377abdf"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.974600 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" event={"ID":"a3a441e4-5ade-4309-938a-0f4fe130a721","Type":"ContainerStarted","Data":"6b4cfcefa38b9fd4bc838a28a7dc091b8b621768de6f0542beef2898a52448ae"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.981041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerStarted","Data":"9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed"} Jan 30 
00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.982450 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"62a838e8656494d098d13d74cc52b6fc0c79efb8bb8f5baac5cd69207bbc9cd2"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.983008 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.983185 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.483146545 +0000 UTC m=+124.354644647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.983852 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.984293 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.484283513 +0000 UTC m=+124.355781565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.988344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" event={"ID":"42f46e1b-e6a2-499c-9e01-fe08785a78a4","Type":"ContainerStarted","Data":"9af11272942ca42d39709cfecca3dc78ece9ca80c014660fcc4006c573c808cb"} Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.000309 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" event={"ID":"c9dfcfad-0e85-4b3e-9a33-3729f7033251","Type":"ContainerStarted","Data":"0088d22741463fcba1136fa3da0b16a34b1e45195cc54544456bfe1bacf22409"} Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.014254 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.014301 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018245 5103 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-2xrjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018398 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podUID="91703ab7-2f05-4831-8200-85210adf830b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018245 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018851 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018303 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 
00:12:14.019076 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.057100 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-dtdff" podStartSLOduration=101.05707833 podStartE2EDuration="1m41.05707833s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.048532462 +0000 UTC m=+123.920030524" watchObservedRunningTime="2026-01-30 00:12:14.05707833 +0000 UTC m=+123.928576382" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.071330 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" podStartSLOduration=101.071306915 podStartE2EDuration="1m41.071306915s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.071005308 +0000 UTC m=+123.942503370" watchObservedRunningTime="2026-01-30 00:12:14.071306915 +0000 UTC m=+123.942804967" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.088082 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.088382 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.588352009 +0000 UTC m=+124.459850061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.102098 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podStartSLOduration=101.102080912 podStartE2EDuration="1m41.102080912s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.099226063 +0000 UTC m=+123.970724135" watchObservedRunningTime="2026-01-30 00:12:14.102080912 +0000 UTC m=+123.973578964" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.171699 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podStartSLOduration=101.171678232 podStartE2EDuration="1m41.171678232s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.136064217 +0000 UTC m=+124.007562269" watchObservedRunningTime="2026-01-30 00:12:14.171678232 +0000 UTC m=+124.043176284" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.174729 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" podStartSLOduration=101.174714716 podStartE2EDuration="1m41.174714716s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.172188444 +0000 UTC m=+124.043686506" watchObservedRunningTime="2026-01-30 00:12:14.174714716 +0000 UTC m=+124.046212768" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.179220 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.193842 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.194487 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.694470215 +0000 UTC m=+124.565968267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.195873 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7v6vx" podStartSLOduration=101.195857339 podStartE2EDuration="1m41.195857339s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.195463329 +0000 UTC m=+124.066961391" watchObservedRunningTime="2026-01-30 00:12:14.195857339 +0000 UTC m=+124.067355391" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.241830 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" podStartSLOduration=101.241795694 podStartE2EDuration="1m41.241795694s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.223271694 +0000 UTC m=+124.094769747" watchObservedRunningTime="2026-01-30 00:12:14.241795694 +0000 UTC m=+124.113293746" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.274400 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" podStartSLOduration=101.274364725 podStartE2EDuration="1m41.274364725s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.265771726 +0000 UTC m=+124.137269778" watchObservedRunningTime="2026-01-30 00:12:14.274364725 +0000 UTC m=+124.145862777" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.274734 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" podStartSLOduration=101.274729344 podStartE2EDuration="1m41.274729344s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.246704803 +0000 UTC m=+124.118202875" watchObservedRunningTime="2026-01-30 00:12:14.274729344 +0000 UTC m=+124.146227396" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.297783 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.298758 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:14.798741116 +0000 UTC m=+124.670239168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.400794 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.401270 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.901251055 +0000 UTC m=+124.772749107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.411992 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51350: no serving certificate available for the kubelet" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.478253 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.478350 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.503578 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.503748 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:15.003721803 +0000 UTC m=+124.875219855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.504412 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.505081 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.005036785 +0000 UTC m=+124.876535057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.606815 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.607128 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.107084332 +0000 UTC m=+124.978582384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.607807 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.608269 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.108260861 +0000 UTC m=+124.979758913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.709809 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.710043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.210002621 +0000 UTC m=+125.081500803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.710639 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.711132 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.211110188 +0000 UTC m=+125.082608440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.718857 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.718927 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.812253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.812479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.312443378 +0000 UTC m=+125.183941440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.812564 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.813388 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.31337716 +0000 UTC m=+125.184875232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.913915 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.914162 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.414125296 +0000 UTC m=+125.285623348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.914600 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.915025 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.415006628 +0000 UTC m=+125.286504680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.007757 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.009625 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"fe1522e7fd6b0dc59e87655bc9973e6cc8f2b63e0ef1b899ac92c61aa6c3e586"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.011074 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" event={"ID":"0ebc9fa5-f75b-4468-b4b8-83695dd067b6","Type":"ContainerStarted","Data":"363aa2140ea2ee68e8f8de3bbe0adcb234b6544f48975b6c100141948f6105fe"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.012831 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8bvp" event={"ID":"2b7c825f-c092-4d5b-9a1d-be16df92e5a2","Type":"ContainerStarted","Data":"957df307ad25860ce5b36830bfbac9760d8f69ed0d321bf1012ad558ae18cce1"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.014680 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"d171d9b792326634b76b70cb6545e3ba503abff9493d3c9455ccf9759920c60c"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.015501 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.015976 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.515947708 +0000 UTC m=+125.387445760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.016305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.016452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"d649bb0db8fba7081a8b8f035d7fc4386fc27011dc5e9a81a9833393d61535cd"} Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.016996 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.516974523 +0000 UTC m=+125.388472575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.018965 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"c3e8f0be8e6159ecb2ddd32ede292413a575d7d9482c4a2b5e7ec1b275b6f48b"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.117356 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.118321 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.618285683 +0000 UTC m=+125.489783735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.136591 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" podStartSLOduration=102.136567347 podStartE2EDuration="1m42.136567347s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:15.133444521 +0000 UTC m=+125.004942583" watchObservedRunningTime="2026-01-30 00:12:15.136567347 +0000 UTC m=+125.008065399" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191198 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191262 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191721 5103 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-2xrjj 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191768 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podUID="91703ab7-2f05-4831-8200-85210adf830b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191802 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191825 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.219905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.221214 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.721196711 +0000 UTC m=+125.592694833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.320968 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.321311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.821263471 +0000 UTC m=+125.692761523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.322098 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.322374 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.822361147 +0000 UTC m=+125.693859199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.423351 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.423674 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.923658486 +0000 UTC m=+125.795156538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.525782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.526662 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.026648917 +0000 UTC m=+125.898146969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.627610 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.627803 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.127773132 +0000 UTC m=+125.999271184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.628414 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.628692 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.128683614 +0000 UTC m=+126.000181666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.730204 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.730639 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.230622559 +0000 UTC m=+126.102120611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.832258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.832801 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.332763378 +0000 UTC m=+126.204261430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.933284 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.933521 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.433482644 +0000 UTC m=+126.304980706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.026609 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerStarted","Data":"9dca98504341e27a03a8ff78c028fadcfc5dd5b0f249cbaf9c99b9d858eb8d3e"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.028235 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" event={"ID":"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4","Type":"ContainerStarted","Data":"39ec93c3402e2db20ced5a9fca981b0a1a1ef4aa1f327beec0e10ddfc1ade594"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.029871 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerStarted","Data":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.031718 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"0a508e2b47be343d36f749765f082787a3574c8316fa9a174514a6459bd0c5ad"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.033971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"9b85284bd66712759742d88c8fb923f2f77677b1a100552dff1ca877e0834c77"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.034958 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.035454 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.535436769 +0000 UTC m=+126.406934821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.038541 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.040581 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.042585 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerStarted","Data":"6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.058183 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.059426 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.059505 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.098559 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" podStartSLOduration=103.098539381 podStartE2EDuration="1m43.098539381s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.096893231 +0000 UTC m=+125.968391283" watchObservedRunningTime="2026-01-30 00:12:16.098539381 +0000 UTC m=+125.970037433" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.099348 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" podStartSLOduration=103.09934077 podStartE2EDuration="1m43.09934077s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:15.496481764 +0000 UTC m=+125.367979826" watchObservedRunningTime="2026-01-30 00:12:16.09934077 +0000 UTC m=+125.970838822" Jan 30 00:12:16 crc 
kubenswrapper[5103]: I0130 00:12:16.136296 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.136483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.636448441 +0000 UTC m=+126.507946493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.150112 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podStartSLOduration=103.150097132 podStartE2EDuration="1m43.150097132s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.148528894 +0000 UTC m=+126.020026946" watchObservedRunningTime="2026-01-30 00:12:16.150097132 +0000 UTC m=+126.021595184" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.175520 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" podStartSLOduration=103.175491249 podStartE2EDuration="1m43.175491249s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.166225484 +0000 UTC m=+126.037723536" watchObservedRunningTime="2026-01-30 00:12:16.175491249 +0000 UTC m=+126.046989301" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.237771 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.238171 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.7381583 +0000 UTC m=+126.609656352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.327405 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.327467 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.338826 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.339077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.839028139 +0000 UTC m=+126.710526191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.339626 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.340027 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.840011643 +0000 UTC m=+126.711509695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416259 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416459 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416956 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.417128 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.438542 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podStartSLOduration=103.438525065 podStartE2EDuration="1m43.438525065s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.436161917 +0000 UTC m=+126.307659959" watchObservedRunningTime="2026-01-30 00:12:16.438525065 +0000 UTC m=+126.310023117" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.440863 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.441380 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.941363914 +0000 UTC m=+126.812861956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491104 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podStartSLOduration=103.491085991 podStartE2EDuration="1m43.491085991s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.470290786 +0000 UTC m=+126.341788828" watchObservedRunningTime="2026-01-30 00:12:16.491085991 +0000 UTC m=+126.362584043" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491589 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491909 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.493423 5103 patch_prober.go:28] interesting pod/apiserver-8596bd845d-6z46s container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.493510 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podUID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.520946 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podStartSLOduration=103.520926815 podStartE2EDuration="1m43.520926815s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.518899676 +0000 UTC m=+126.390397738" watchObservedRunningTime="2026-01-30 00:12:16.520926815 +0000 UTC m=+126.392424867" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.521517 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n8bvp" podStartSLOduration=16.521509799 podStartE2EDuration="16.521509799s" podCreationTimestamp="2026-01-30 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.491680875 +0000 UTC m=+126.363178927" watchObservedRunningTime="2026-01-30 00:12:16.521509799 +0000 UTC m=+126.393007851" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.547769 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.552495 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.052474341 +0000 UTC m=+126.923972473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.603348 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=50.603325336 podStartE2EDuration="50.603325336s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.601745147 +0000 UTC m=+126.473243189" watchObservedRunningTime="2026-01-30 00:12:16.603325336 +0000 UTC m=+126.474823388" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.604133 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" podStartSLOduration=103.604127745 podStartE2EDuration="1m43.604127745s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.547165862 +0000 UTC m=+126.418663924" watchObservedRunningTime="2026-01-30 00:12:16.604127745 +0000 UTC m=+126.475625797" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.619784 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.627258 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.627330 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.647205 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.647280 5103 
prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.649039 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.649220 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.149202899 +0000 UTC m=+127.020700951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.649574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.649904 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.149894096 +0000 UTC m=+127.021392148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.751476 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.751794 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:17.25177611 +0000 UTC m=+127.123274162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.853615 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.854256 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.354222737 +0000 UTC m=+127.225720979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.955181 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.955414 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.455378553 +0000 UTC m=+127.326876605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.955874 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.956261 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.456253894 +0000 UTC m=+127.327751946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.000088 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51352: no serving certificate available for the kubelet" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.037308 5103 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-clmhf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]log ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/max-in-flight-filter ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:12:17 crc 
kubenswrapper[5103]: livez check failed Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.037400 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" podUID="e9100695-b78d-4b2f-9cea-9d022064c792" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.057159 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.057454 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.55741636 +0000 UTC m=+127.428914412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.058149 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.058546 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.558536687 +0000 UTC m=+127.430034739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.063415 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"52c1f9bdb16593064d2f5f5160ae38d23ba015d82d9b7a4cdc6d4b5e499bad67"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.064603 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2mh7r" event={"ID":"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300","Type":"ContainerStarted","Data":"d1cf169304733ed9b279f4a210f8873f7a356dd7c4623f31e3fc4d1075634789"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.065936 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"2534e9dd2dd4610ae33d4fb0f2d80f272f56ca1fbc159972eef0d6eb6f76663b"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.160021 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.160233 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.660202245 +0000 UTC m=+127.531700297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.160340 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.160880 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.660860731 +0000 UTC m=+127.532358783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.261569 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.261758 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.761733299 +0000 UTC m=+127.633231351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.262146 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.262523 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.762513748 +0000 UTC m=+127.634011800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.363097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.363314 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.863281474 +0000 UTC m=+127.734779526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.363714 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.364112 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.864097464 +0000 UTC m=+127.735595616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.426925 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.465145 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.465509 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.965477835 +0000 UTC m=+127.836975927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.465751 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.466405 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.966390118 +0000 UTC m=+127.837888200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.567172 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.567380 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.067349189 +0000 UTC m=+127.938847241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.567645 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.568038 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.068020805 +0000 UTC m=+127.939518867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.620409 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.620476 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.668928 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.670737 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.170710258 +0000 UTC m=+128.042208390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.771090 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.771513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.271493895 +0000 UTC m=+128.142991947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.871899 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.872155 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.372118648 +0000 UTC m=+128.243616710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.872856 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.873284 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.373274866 +0000 UTC m=+128.244772918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.974408 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.974625 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.474594865 +0000 UTC m=+128.346092918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.975011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.975364 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.475357804 +0000 UTC m=+128.346855856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.075686 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.076017 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.576002407 +0000 UTC m=+128.447500459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.177498 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.177877 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.67786468 +0000 UTC m=+128.549362732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.278724 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.278902 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.778870682 +0000 UTC m=+128.650368734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.280247 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.280606 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.780598754 +0000 UTC m=+128.652096806 (durationBeforeRetry 500ms). 
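[editor's note] Each failed attempt above is immediately re-queued, but nestedpendingoperations refuses to run it again before the printed deadline; the roughly 500ms spacing between timestamps is that backoff at work. A rough sketch of the gating idea, assuming a fixed delay (the real kubelet can grow the delay for repeated failures):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate lets an operation run only after its backoff deadline has passed.
type retryGate struct {
	notBefore time.Time
	delay     time.Duration
}

func (g *retryGate) run(op func() error) error {
	now := time.Now()
	if now.Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore = now.Add(g.delay) // durationBeforeRetry
		return err
	}
	return nil
}

func main() {
	gate := &retryGate{delay: 500 * time.Millisecond}
	mount := func() error { return errors.New("driver not registered") }

	for i := 0; i < 3; i++ {
		fmt.Println(gate.run(mount)) // first call fails, later calls are gated
		time.Sleep(200 * time.Millisecond)
	}
}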
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.346228 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.346623 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.352595 5103 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r9ddz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.352669 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.383114 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.383720 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.883702417 +0000 UTC m=+128.755200470 (durationBeforeRetry 500ms). 
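[editor's note] The probe failures interleaved with the mount retries (the router startup probe and the marketplace-operator/oauth-openshift readiness probes) all report "connection refused", i.e. nothing is listening on the probed port yet. The sketch below shows what an HTTP probe of this kind boils down to; it is a simplified stand-in, not the kubelet's prober implementation, and the URL and timeout are taken from or assumed for illustration.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a single HTTP GET and reports failure for transport errors
// (e.g. connection refused) or for status codes outside 200-399, which is
// roughly how the kubelet's HTTP prober classifies results.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Same endpoint the router startup probe in the log is hitting.
	if err := probe("http://localhost:1936/healthz/ready", 1*time.Second); err != nil {
		fmt.Println(err)
	}
}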
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.441721 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podStartSLOduration=17.441702326 podStartE2EDuration="17.441702326s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.414082895 +0000 UTC m=+128.285580957" watchObservedRunningTime="2026-01-30 00:12:18.441702326 +0000 UTC m=+128.313200378" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.442774 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2mh7r" podStartSLOduration=17.442767221 podStartE2EDuration="17.442767221s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.442197318 +0000 UTC m=+128.313695370" watchObservedRunningTime="2026-01-30 00:12:18.442767221 +0000 UTC m=+128.314265273" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.487224 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.490784 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.990766747 +0000 UTC m=+128.862264799 (durationBeforeRetry 500ms). 
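[editor's note] The pod_startup_latency_tracker lines above record how long a pod took from creation to being observed running. With no image pulls recorded (the zero-value firstStartedPulling/lastFinishedPulling timestamps), the SLO duration is simply the watch-observation time minus the creation timestamp, e.g. 00:12:18.441702326 − 00:12:01 ≈ 17.44s for cni-sysctl-allowlist-ds-cnbd2. A small sketch of that arithmetic using the timestamps from that entry:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cni-sysctl-allowlist-ds-cnbd2 entry above.
	created := mustParse("2026-01-30 00:12:01 +0000 UTC")
	observed := mustParse("2026-01-30 00:12:18.441702326 +0000 UTC")

	// With no image pulls, startup SLO duration is observation time minus creation time.
	fmt.Printf("podStartSLOduration=%.9f\n", observed.Sub(created).Seconds())
	// prints podStartSLOduration=17.441702326, matching the log line
}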
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499483 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499525 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499541 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499557 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499599 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.500097 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.500381 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.507256 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.507453 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.566393 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" podStartSLOduration=105.566370322 podStartE2EDuration="1m45.566370322s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.560554491 +0000 UTC m=+128.432052553" watchObservedRunningTime="2026-01-30 00:12:18.566370322 +0000 UTC m=+128.437868374" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.588817 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.589175 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"revision-pruner-6-crc\" 
(UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.589345 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.589497 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.089477613 +0000 UTC m=+128.960975665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.622032 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.622127 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690350 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690409 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690457 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690579 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod 
\"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.690812 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.190799343 +0000 UTC m=+129.062297395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.726094 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.791699 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.792132 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.292112233 +0000 UTC m=+129.163610285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.828901 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.893237 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.893594 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:19.393578636 +0000 UTC m=+129.265076698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.996216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.996993 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.496974246 +0000 UTC m=+129.368472298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.055433 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: W0130 00:12:19.063533 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84904a2b_f796_4f03_be5b_c5e18c1806fe.slice/crio-d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2 WatchSource:0}: Error finding container d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2: Status 404 returned error can't find the container with id d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2 Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.075634 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerStarted","Data":"d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.077652 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"ff81cdbda237a8e61eb1caba96c0f18f89b9fc8b809a3c054d9433b8fff3fda5"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.079665 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"881e23f51e1bcf760414e8b5848ebe10a98e30b208af4036e32809d20558764d"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.098768 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.099211 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.599191578 +0000 UTC m=+129.470689630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.171248 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.199790 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.200018 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.699988655 +0000 UTC m=+129.571486707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.200142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.200709 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.700689922 +0000 UTC m=+129.572187984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.301278 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.301428 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.801397997 +0000 UTC m=+129.672896049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.301830 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.302187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.802179706 +0000 UTC m=+129.673677758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.403454 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.403680 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.903648539 +0000 UTC m=+129.775146591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.404244 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.404605 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.904597012 +0000 UTC m=+129.776095064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.505680 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.505893 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.00585692 +0000 UTC m=+129.877354972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.506432 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.506824 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.006810444 +0000 UTC m=+129.878308496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.532318 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.548994 5103 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r9ddz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" start-of-body= Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.549094 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.574822 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" podStartSLOduration=106.574807244 podStartE2EDuration="1m46.574807244s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:19.573113163 +0000 UTC m=+129.444611225" watchObservedRunningTime="2026-01-30 00:12:19.574807244 +0000 UTC m=+129.446305296" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.608200 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.608752 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.108718868 +0000 UTC m=+129.980216920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.622364 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.622442 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.674391 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.674685 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.677275 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.677488 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.689560 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" podStartSLOduration=107.68954327 podStartE2EDuration="1m47.68954327s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:19.600395106 +0000 UTC m=+129.471893158" watchObservedRunningTime="2026-01-30 00:12:19.68954327 +0000 UTC m=+129.561041322" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.712588 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.715371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.215343746 +0000 UTC m=+130.086841798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.817160 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.818068 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.818104 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.818260 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.318242774 +0000 UTC m=+130.189740826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920248 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920316 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920810 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.921196 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.421177433 +0000 UTC m=+130.292675485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.947957 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.973847 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.996901 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.021903 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.022436 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.522420001 +0000 UTC m=+130.393918053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.090339 5103 generic.go:358] "Generic (PLEG): container finished" podID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerID="9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.125275 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.125688 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.625671488 +0000 UTC m=+130.497169540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.227185 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.227324 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:20.727297095 +0000 UTC m=+130.598795147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.227916 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.228247 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.728238148 +0000 UTC m=+130.599736200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.329237 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.329504 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.829473646 +0000 UTC m=+130.700971698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.329975 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.330353 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.830340517 +0000 UTC m=+130.701838569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.431295 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.431477 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.931446241 +0000 UTC m=+130.802944293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.431763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.432123 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.932113827 +0000 UTC m=+130.803611879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.533685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.533803 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.033777996 +0000 UTC m=+130.905276048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.534165 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.534699 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.034681147 +0000 UTC m=+130.906179209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566641 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566706 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"40767d4f61f3f00b189ba8d8331595a1e056ec284136d6b7f5ac8f9ed3c8f3eb"} Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566753 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566813 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.572578 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.635301 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.635479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:21.135445454 +0000 UTC m=+131.006943506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.635634 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.636157 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.136150251 +0000 UTC m=+131.007648303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.664183 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:20 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.664254 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.736924 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.737129 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.237100062 +0000 UTC m=+131.108598114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737671 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737862 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737937 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.738069 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.738221 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.238214189 +0000 UTC m=+131.109712241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.839592 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.839765 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.339741822 +0000 UTC m=+131.211239874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.839992 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840128 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.840477 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.34046766 +0000 UTC m=+131.211965712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840704 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840937 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.863958 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.883069 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.927479 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" gracePeriod=30 Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.944240 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.944371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.444355102 +0000 UTC m=+131.315853154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.944693 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.944960 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.444953667 +0000 UTC m=+131.316451719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.029970 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gdlhx" podStartSLOduration=21.02994748 podStartE2EDuration="21.02994748s" podCreationTimestamp="2026-01-30 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.983374779 +0000 UTC m=+130.854872851" watchObservedRunningTime="2026-01-30 00:12:21.02994748 +0000 UTC m=+130.901445532" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.031077 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" podStartSLOduration=108.031069827 podStartE2EDuration="1m48.031069827s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.028623548 +0000 UTC m=+130.900121620" watchObservedRunningTime="2026-01-30 00:12:21.031069827 +0000 UTC m=+130.902567879" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.046212 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.046384 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:21.546326308 +0000 UTC m=+131.417824360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.046872 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.047493 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.547472096 +0000 UTC m=+131.418970148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.113093 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.116896 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.119890 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139193 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139509 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerDied","Data":"9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139536 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerStarted","Data":"1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139548 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.149384 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.149854 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.64982686 +0000 UTC m=+131.521324912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252509 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252777 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252860 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.253103 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.253686 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.753669991 +0000 UTC m=+131.625168043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: W0130 00:12:21.323417 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb7f7db_c773_49f6_b58b_6bd929f25f3a.slice/crio-b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba WatchSource:0}: Error finding container b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba: Status 404 returned error can't find the container with id b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354127 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.354332 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.854305625 +0000 UTC m=+131.725803677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354704 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354900 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354971 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.355010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.355379 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.855370021 +0000 UTC m=+131.726868073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.356169 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.356308 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.387228 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.449970 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.455958 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.456130 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.956101226 +0000 UTC m=+131.827599278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.456336 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.456681 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.95666659 +0000 UTC m=+131.828164642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468377 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468425 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerStarted","Data":"24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468450 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468614 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468676 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.494605 5103 patch_prober.go:28] interesting pod/apiserver-8596bd845d-6z46s container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.494668 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podUID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.557842 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558117 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558436 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.558568 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.058447571 +0000 UTC m=+131.929945633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558983 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.559500 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.059489736 +0000 UTC m=+131.930987888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.627587 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:21 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.628177 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.660747 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661263 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661357 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661398 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.661586 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.161553754 +0000 UTC m=+132.033051806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661799 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.662288 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.707609 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.763002 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.763372 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.263360245 +0000 UTC m=+132.134858297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.815375 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854037 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854124 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854182 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854211 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854359 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.865786 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.866097 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.366079369 +0000 UTC m=+132.237577421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967568 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967609 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967644 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967687 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.967994 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.467981553 +0000 UTC m=+132.339479605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.968798 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068666 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068908 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068936 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068967 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.069426 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.569396605 +0000 UTC m=+132.440894657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.069621 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.069864 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.095870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: W0130 00:12:22.116684 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf9931f_40f0_4d66_b375_89bec91fd6b8.slice/crio-efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3 WatchSource:0}: Error finding container efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3: Status 404 returned error can't find the container with id efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3 Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.140347 5103 generic.go:358] "Generic (PLEG): container finished" podID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerID="24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02" exitCode=0 Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156594 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156643 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerDied","Data":"24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156666 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156849 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.161677 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51358: no serving certificate available for the kubelet" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.163724 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.170696 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.172180 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.67216284 +0000 UTC m=+132.543660892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.176877 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerStarted","Data":"b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.192196 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerStarted","Data":"61764b58f50ceebb2c7b19c23cfca937d7976fd5804c25d5eefbebe83ee09940"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.197627 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerStarted","Data":"0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.273918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.274573 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: 
I0130 00:12:22.274607 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.274739 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.274972 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.774952495 +0000 UTC m=+132.646450547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.288608 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.375962 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376443 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376482 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.377402 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.877383002 +0000 UTC m=+132.748881054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.377667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.378182 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.379088 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.401016 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478034 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478297 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.478400 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.978377164 +0000 UTC m=+132.849875206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478641 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.479014 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.979005839 +0000 UTC m=+132.850503891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.531534 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.565670 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.565647503 podStartE2EDuration="3.565647503s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.553461587 +0000 UTC m=+132.424959639" watchObservedRunningTime="2026-01-30 00:12:22.565647503 +0000 UTC m=+132.437145555" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.579722 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.580187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.080168135 +0000 UTC m=+132.951666187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.625332 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:22 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.625722 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681174 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681303 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681337 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.684423 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume" (OuterVolumeSpecName: "config-volume") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.685670 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:23.185642096 +0000 UTC m=+133.057140368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.691974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw" (OuterVolumeSpecName: "kube-api-access-smxdw") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "kube-api-access-smxdw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.701254 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.776827 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.776922 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.783730 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784402 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784432 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784445 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.784547 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.284520116 +0000 UTC m=+133.156018168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: W0130 00:12:22.834272 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc312b248_250c_4b33_9c7a_f79c1e73a75b.slice/crio-e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff WatchSource:0}: Error finding container e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff: Status 404 returned error can't find the container with id e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.886509 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.887183 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.387160063 +0000 UTC m=+133.258658125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.988591 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.988990 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.488970189 +0000 UTC m=+133.360468241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.090718 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.091376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.59136364 +0000 UTC m=+133.462861692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.192980 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.193326 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.693282868 +0000 UTC m=+133.564780960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.193899 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.194631 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.694611211 +0000 UTC m=+133.566109293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.213669 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" exitCode=0 Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.295756 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.295896 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.795876404 +0000 UTC m=+133.667374456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.296086 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.296389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.796381406 +0000 UTC m=+133.667879458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.301621 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.301728 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.397249 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.397487 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.897453274 +0000 UTC m=+133.768951326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.398095 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.398532 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.8985149 +0000 UTC m=+133.770012962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.499333 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.499535 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.999496756 +0000 UTC m=+133.870994808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.499978 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.500366 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.000351627 +0000 UTC m=+133.871849679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586108 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586411 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586884 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.588083 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602315 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"c389a8c50911f0500e80c3994452e04998e51f35893361879d7ec4d4c0c6337f"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602358 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602378 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602397 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602411 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerStarted","Data":"d31bb6a2f9fb799d1f7776dc6dbb0a5dcdd009e2858db6301a056354672735ba"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602422 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerDied","Data":"22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602437 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerStarted","Data":"efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602487 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerStarted","Data":"e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602501 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603483 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603504 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603621 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603814 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.603946 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.103922826 +0000 UTC m=+133.975420898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.604301 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.604632 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.104622103 +0000 UTC m=+133.976120165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.623534 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:23 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.623615 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706251 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706473 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706531 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706564 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.706689 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.206674125 +0000 UTC m=+134.078172177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807727 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807774 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807921 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807974 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.808290 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.808349 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.808351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.308330998 +0000 UTC m=+134.179829060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.815707 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.861894 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.910664 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.910743 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.410717228 +0000 UTC m=+134.282215290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.910984 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"84904a2b-f796-4f03-be5b-c5e18c1806fe\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911142 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84904a2b-f796-4f03-be5b-c5e18c1806fe" (UID: "84904a2b-f796-4f03-be5b-c5e18c1806fe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911150 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"84904a2b-f796-4f03-be5b-c5e18c1806fe\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911343 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911862 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.912183 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.412155423 +0000 UTC m=+134.283653475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.917940 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84904a2b-f796-4f03-be5b-c5e18c1806fe" (UID: "84904a2b-f796-4f03-be5b-c5e18c1806fe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.935714 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.012932 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.013530 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.014905 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:24.514884131 +0000 UTC m=+134.386382183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.072715 5103 patch_prober.go:28] interesting pod/console-64d44f6ddf-7v6vx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.072779 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7v6vx" podUID="9bef77c6-141b-4cff-a91d-7515860a6a2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.115008 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.115607 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.61556858 +0000 UTC m=+134.487066632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: W0130 00:12:24.148983 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4d4fce_00ed_4163_8a52_864aa4d324e6.slice/crio-7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474 WatchSource:0}: Error finding container 7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474: Status 404 returned error can't find the container with id 7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.217037 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.217392 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.717368196 +0000 UTC m=+134.588866248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.217656 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.218214 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.718182866 +0000 UTC m=+134.589680908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.225374 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.227254 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.229256 5103 generic.go:358] "Generic (PLEG): container finished" podID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerID="0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.318790 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.319174 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.819125861 +0000 UTC m=+134.690623933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.319584 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.320088 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.820066734 +0000 UTC m=+134.691564776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.420589 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.420792 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.920753442 +0000 UTC m=+134.792251504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.421568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.421922 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.92190787 +0000 UTC m=+134.793405922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.523207 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.523458 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.023394068 +0000 UTC m=+134.894892120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.524131 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.524753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.02472152 +0000 UTC m=+134.896219612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.624928 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:24 crc kubenswrapper[5103]: [+]has-synced ok Jan 30 00:12:24 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:24 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.625028 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.625550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.625732 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.125700827 +0000 UTC m=+134.997198919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.626153 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.626599 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.126575198 +0000 UTC m=+134.998073260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.701622 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.701695 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702318 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702509 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702993 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.703017 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.703176 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.710618 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754206 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754438 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754490 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754609 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " 
pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.757319 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.257292067 +0000 UTC m=+135.128790129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855467 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855513 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855588 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855608 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.856289 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.356275745 +0000 UTC m=+135.227773797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.856667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.856717 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.899309 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.956666 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.957389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.457357963 +0000 UTC m=+135.328856045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.058737 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.059125 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:25.559107578 +0000 UTC m=+135.430605640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.062108 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.160208 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.160373 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.66033712 +0000 UTC m=+135.531835172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.160711 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.161232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.661208481 +0000 UTC m=+135.532706533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.261685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.262043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.762022873 +0000 UTC m=+135.633520925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: W0130 00:12:25.339144 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3bfb26_42f9_43f4_8126_b941aea6ecca.slice/crio-06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26 WatchSource:0}: Error finding container 06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26: Status 404 returned error can't find the container with id 06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26 Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.363766 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.364284 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.86426563 +0000 UTC m=+135.735763682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.465384 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.465547 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.965520203 +0000 UTC m=+135.837018255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.466035 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.466374 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.966358633 +0000 UTC m=+135.837856685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.566834 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.567077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.067032122 +0000 UTC m=+135.938530174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.567200 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.567870 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.067829101 +0000 UTC m=+135.939327153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.668253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.668379 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.168355036 +0000 UTC m=+136.039853088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.668853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.669205 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.169196816 +0000 UTC m=+136.040694868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725372 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725436 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725634 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.726323 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736322 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736410 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736435 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736501 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736512 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerDied","Data":"0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736530 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736543 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerDied","Data":"d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736566 5103 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736583 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.769642 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.769860 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.269814034 +0000 UTC m=+136.141312086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770316 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770384 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770413 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770439 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.771018 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.271011553 +0000 UTC m=+136.142509605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.826762 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.828942 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.871731 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.871992 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.371959338 +0000 UTC m=+136.243457400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.872868 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873029 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873144 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873237 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.875242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.876813 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.878351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.378318833 +0000 UTC m=+136.249817055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.910220 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.975280 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.975540 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.475488826 +0000 UTC m=+136.346986878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.976333 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.976719 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.476702036 +0000 UTC m=+136.348200308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.048536 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.078113 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.078606 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.578590034 +0000 UTC m=+136.450088086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.182805 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.183344 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.683330121 +0000 UTC m=+136.554828173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.274524 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.279530 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" exitCode=0 Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.279664 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.299663 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.300558 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.800530572 +0000 UTC m=+136.672028624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.300686 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.302150 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.802141821 +0000 UTC m=+136.673639873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.304415 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.311839 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748" exitCode=0 Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.312104 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.405311 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.405900 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.905875864 +0000 UTC m=+136.777373916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.406033 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.407552 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.907540954 +0000 UTC m=+136.779039006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.413054 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420094 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420554 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420845 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.501848 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.507595 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.510143 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.010106569 +0000 UTC m=+136.881604621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.513503 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.564493 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:26 crc kubenswrapper[5103]: W0130 00:12:26.593803 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096edab0_9031_4bcd_8451_a93417372ee1.slice/crio-d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf WatchSource:0}: Error finding container d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf: Status 404 returned error can't find the container with id d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.614174 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.614613 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.11460083 +0000 UTC m=+136.986098882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.621068 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715223 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715285 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea21664f-12f0-4c35-bcb0-2f3b355f9153" (UID: "ea21664f-12f0-4c35-bcb0-2f3b355f9153"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715383 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715648 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.716737 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.216696064 +0000 UTC m=+137.088194116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.722532 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea21664f-12f0-4c35-bcb0-2f3b355f9153" (UID: "ea21664f-12f0-4c35-bcb0-2f3b355f9153"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.817177 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.817800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.317782732 +0000 UTC m=+137.189280784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.817862 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.918696 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.918872 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.41884621 +0000 UTC m=+137.290344252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.919281 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.919629 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:27.419617049 +0000 UTC m=+137.291115101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.021014 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.021187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.521158239 +0000 UTC m=+137.392656281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.021858 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.022338 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.522315367 +0000 UTC m=+137.393813469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.122658 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.122820 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.622788791 +0000 UTC m=+137.494286853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.123238 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.123600 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.62358361 +0000 UTC m=+137.495081812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.224298 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.224513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.724481434 +0000 UTC m=+137.595979486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.224784 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.225077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.725064218 +0000 UTC m=+137.596562270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323624 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323751 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323787 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326674 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerDied","Data":"1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326694 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326696 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f" Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.328014 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.328232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.828208827 +0000 UTC m=+137.699706889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.328366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.329679 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.829666952 +0000 UTC m=+137.701165014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.332642 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.332741 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.335362 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.335856 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.431105 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.431687 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:27.931595861 +0000 UTC m=+137.803093933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.432214 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.434651 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.934627115 +0000 UTC m=+137.806125167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.534821 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.535905 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.035869887 +0000 UTC m=+137.907367979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.637391 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.637846 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.137829487 +0000 UTC m=+138.009327529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.740706 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.740987 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.240910794 +0000 UTC m=+138.112408846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.741626 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.742064 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.242041612 +0000 UTC m=+138.113539664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.843647 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.843894 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.343849188 +0000 UTC m=+138.215347240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.844387 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.844852 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.344844062 +0000 UTC m=+138.216342114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.946240 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.946485 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.446427473 +0000 UTC m=+138.317925535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.946763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.947272 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.447249193 +0000 UTC m=+138.318747245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.048034 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.048286 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.548257929 +0000 UTC m=+138.419755981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.048693 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.049087 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.549042178 +0000 UTC m=+138.420540230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.150317 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.150575 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.650535386 +0000 UTC m=+138.522033448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.151111 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.151590 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.651572482 +0000 UTC m=+138.523070534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.252605 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.252911 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.752855285 +0000 UTC m=+138.624353347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.253258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.253703 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.753680185 +0000 UTC m=+138.625178237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.349017 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.351436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.353385 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.355528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.355717 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.855686666 +0000 UTC m=+138.727184708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.357787 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.360812 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.360897 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.370160 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.870135807 +0000 UTC m=+138.741633859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.462403 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.462664 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.962633787 +0000 UTC m=+138.834131839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.462880 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.463351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.963333124 +0000 UTC m=+138.834831286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.564495 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.564708 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.064665309 +0000 UTC m=+138.936163371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.565123 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.565534 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.065516929 +0000 UTC m=+138.937014991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.669489 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.670079 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.170039652 +0000 UTC m=+139.041537704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.771272 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.771610 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.271598222 +0000 UTC m=+139.143096274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.871958 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.872133 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.372104886 +0000 UTC m=+139.243602948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.872658 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.873029 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.373013428 +0000 UTC m=+139.244511480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.974301 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.974657 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.474596799 +0000 UTC m=+139.346094851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.975085 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.975482 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.47546247 +0000 UTC m=+139.346960722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.076114 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.076279 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.576244751 +0000 UTC m=+139.447742803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.076708 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.077101 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.577042161 +0000 UTC m=+139.448540213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.178216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.178573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.678511599 +0000 UTC m=+139.550009651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.178873 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.179297 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.679277637 +0000 UTC m=+139.550775829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.280223 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.280436 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.780392867 +0000 UTC m=+139.651891049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.281371 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.281822 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.781808881 +0000 UTC m=+139.653306933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.385842 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.386074 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.886017896 +0000 UTC m=+139.757515948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.386952 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.387437 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.887412319 +0000 UTC m=+139.758910551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.489060 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.489362 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.989314258 +0000 UTC m=+139.860812310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.489578 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.490040 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.990018555 +0000 UTC m=+139.861516597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.555647 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.595845 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.599179 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.099127279 +0000 UTC m=+139.970625331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.599568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.600136 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.100111433 +0000 UTC m=+139.971609485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.701077 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.701394 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.201353255 +0000 UTC m=+140.072851317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.701535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.702537 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.202523114 +0000 UTC m=+140.074021366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.803241 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.803731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.303711005 +0000 UTC m=+140.175209057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.905313 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.905984 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.405961592 +0000 UTC m=+140.277459644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.006445 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.006729 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.506673901 +0000 UTC m=+140.378171953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.007030 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.007539 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.507505831 +0000 UTC m=+140.379003883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.108216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.108653 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.608637161 +0000 UTC m=+140.480135203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.210010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.210517 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.710445257 +0000 UTC m=+140.581943309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.310981 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.311310 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.81128114 +0000 UTC m=+140.682779192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.412525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.412961 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.912939882 +0000 UTC m=+140.784437934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.513937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.514227 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.014191175 +0000 UTC m=+140.885689227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.514669 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.515011 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.015002285 +0000 UTC m=+140.886500337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616258 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.616410 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.116387821 +0000 UTC m=+140.987885883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616508 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616629 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.618817 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.618837 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.633763 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.634677 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.667870 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717716 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717841 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.718178 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.218156516 +0000 UTC m=+141.089654578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.721541 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.729448 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.745297 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.745675 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.819338 5103 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.819512 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.31948262 +0000 UTC m=+141.190980672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.820010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.820534 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.320515555 +0000 UTC m=+141.192013607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.921218 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.921396 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.421370258 +0000 UTC m=+141.292868310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.921643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.922311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.42227249 +0000 UTC m=+141.293770552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.932898 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.942288 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.022390 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.022542 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.522518188 +0000 UTC m=+141.394016240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.022697 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.023079 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.523038291 +0000 UTC m=+141.394536343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.124516 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.124699 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.624671253 +0000 UTC m=+141.496169295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.124926 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.125320 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.625303568 +0000 UTC m=+141.496801620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.226391 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.226532 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.72651271 +0000 UTC m=+141.598010762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.226698 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.226973 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.726962361 +0000 UTC m=+141.598460413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.328090 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.82806121 +0000 UTC m=+141.699559272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.328099 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.328580 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.328909 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.82889811 +0000 UTC m=+141.700396162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.386595 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"d6c29e6a0e420d4cd531b73b85ba8abd78aeceb53c509110c477fb6b2fad95e9"} Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.430558 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.430981 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.930951372 +0000 UTC m=+141.802449424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.532769 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.533146 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.033129707 +0000 UTC m=+141.904627759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.635519 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.636389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.136350447 +0000 UTC m=+142.007848489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.640385 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.641239 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.141217955 +0000 UTC m=+142.012716007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.703825 5103 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.741564 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.742426 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.242380266 +0000 UTC m=+142.113878438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.843360 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.843413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.843774 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.343761992 +0000 UTC m=+142.215260044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.845884 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.859573 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.944502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.944780 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.444741528 +0000 UTC m=+142.316239590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.945121 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.945633 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.445617439 +0000 UTC m=+142.317115491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.046323 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5103]: E0130 00:12:32.046483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.546462062 +0000 UTC m=+142.417960114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.074783 5103 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T00:12:31.703854569Z","UUID":"c3e61ce8-6247-4f60-95a2-118b5bac39b0","Handler":null,"Name":"","Endpoint":""} Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.079329 5103 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.079360 5103 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.111116 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.112185 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.147532 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.164766 5103 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.164827 5103 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.205347 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.248718 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.254643 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.320354 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.328808 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.424675 5103 ???:1] "http: TLS handshake error from 192.168.126.11:58636: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775396 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775463 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775509 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776029 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776097 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" gracePeriod=2 Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776512 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776670 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.877854 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.068747 5103 patch_prober.go:28] interesting pod/console-64d44f6ddf-7v6vx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.068847 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7v6vx" podUID="9bef77c6-141b-4cff-a91d-7515860a6a2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: 
connection refused" Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.406359 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.406456 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.351759 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.355016 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.357311 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.357357 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:42 crc kubenswrapper[5103]: I0130 00:12:42.778703 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:42 crc kubenswrapper[5103]: I0130 00:12:42.779668 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:44 crc kubenswrapper[5103]: I0130 00:12:44.105829 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:44 crc kubenswrapper[5103]: I0130 00:12:44.114673 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.350286 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.351917 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.353732 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.353818 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:49 crc kubenswrapper[5103]: I0130 00:12:49.553234 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521127 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521594 5103 generic.go:358] "Generic (PLEG): container finished" podID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" exitCode=137 Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521781 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerDied","Data":"6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9"} Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.777413 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.777531 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.943647 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59950: no serving certificate available for the kubelet" Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.346581 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running 
failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.347742 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.348369 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.348435 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.335598 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337269 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337423 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337623 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.627628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.627822 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.631430 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.633273 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.736858 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.736936 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838161 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838653 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838400 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.857789 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.960400 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:02 crc kubenswrapper[5103]: I0130 00:13:02.777397 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:02 crc kubenswrapper[5103]: I0130 00:13:02.777916 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.527694 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.729380 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.729667 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797136 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797211 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797249 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899429 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899515 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899589 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899624 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.925179 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:06 crc kubenswrapper[5103]: I0130 00:13:06.047578 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:07 crc kubenswrapper[5103]: I0130 00:13:07.935095 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:07 crc kubenswrapper[5103]: I0130 00:13:07.935225 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028540 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028693 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028952 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028993 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029141 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029556 5103 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029830 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029820 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready" (OuterVolumeSpecName: "ready") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.039881 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv" (OuterVolumeSpecName: "kube-api-access-lbkqv") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "kube-api-access-lbkqv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131789 5103 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131839 5103 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131861 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.498414 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa WatchSource:0}: Error finding container 15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa: Status 404 returned error can't find the container with id 15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.574368 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vsrcq"] Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.640929 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.641882 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"d4608abd8fe0941f7b6442e65d03e4a4c7fe4f59ac5332172c75cf635de5a05a"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643187 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643306 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerDied","Data":"973863cd6d6133ec3ff6a7fd2a13f58a8dd52f466be2fd39e8f85026734e7547"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643328 5103 scope.go:117] "RemoveContainer" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643389 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.653552 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.684227 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.687962 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.846187 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019 WatchSource:0}: Error finding container bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019: Status 404 returned error can't find the container with id bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019 Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.847386 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd69ff998_a349_40e4_8653_bfded7d60952.slice/crio-4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d WatchSource:0}: Error finding container 4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d: Status 404 returned error can't find the container with id 4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d Jan 30 00:13:09 crc kubenswrapper[5103]: W0130 00:13:09.026346 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda8e87128_3548_4aa6_97ae_4fbdebabb51b.slice/crio-9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2 WatchSource:0}: Error finding container 9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2: Status 404 returned error can't find the container with id 9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.666467 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" exitCode=0 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.673851 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" exitCode=0 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.716724 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.716798 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 
00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerStarted","Data":"d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274112 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274195 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274216 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerStarted","Data":"9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.283678 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" path="/var/lib/kubelet/pods/e1617c52-82bc-4480-9bc4-e37e0264876e/volumes" Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284355 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerStarted","Data":"4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284389 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284405 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284419 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284432 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284442 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"64cb928b977091387595e423e5a54903621b359f7992c380e3153e8a477eefa3"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284456 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" 
event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a1300b5c3d788e6b60e029e3a403486ce2ec566c355064d07eac6df679192d2b"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.699320 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.699450 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.703203 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.703383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.705353 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4d5ffd7684d68fe0303385b117e52f80b8bcedc1577f8188ae6e3d7ce592db56"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.707541 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.707775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.711041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.717681 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.004807 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.004894 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection 
refused" Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.726421 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" exitCode=0 Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.726518 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.731139 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2" exitCode=0 Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.731363 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.733031 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.741419 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f" exitCode=0 Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.741499 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.743735 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"be400c94a1a85e05f6226e648d9c94032f43d6ef128d7bb3dc7c74aff25e68bd"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.775321 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.775389 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:13 crc kubenswrapper[5103]: I0130 00:13:13.751587 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"a42e6af7e4fdd14b0555dbc45cc5b48df70e1022fde98251062f220847d01610"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.758205 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"2a1d2a6fd9b0415c90f46e84f4dbf0c0ca79a15746a84e5e6dd0f2a6d613540a"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.759901 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerStarted","Data":"7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.761575 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerStarted","Data":"ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3"} Jan 30 00:13:15 crc kubenswrapper[5103]: I0130 00:13:15.173830 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.395561 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.432661 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" podStartSLOduration=163.432630167 podStartE2EDuration="2m43.432630167s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:16.428706891 +0000 UTC m=+186.300205003" watchObservedRunningTime="2026-01-30 00:13:16.432630167 +0000 UTC m=+186.304128259" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.776543 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerStarted","Data":"38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28"} Jan 30 00:13:17 crc kubenswrapper[5103]: I0130 00:13:17.787461 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"d53e734783ab297ba9e52fe92a54022392d3212be964e499bd29b942fa8453ef"} Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.798168 5103 generic.go:358] "Generic (PLEG): container finished" podID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerID="38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28" exitCode=0 Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.798282 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerDied","Data":"38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28"} Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.803942 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerStarted","Data":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} Jan 30 00:13:19 crc kubenswrapper[5103]: I0130 00:13:19.815815 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerStarted","Data":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} Jan 30 00:13:19 crc kubenswrapper[5103]: I0130 00:13:19.923564 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=20.923540831 podStartE2EDuration="20.923540831s" podCreationTimestamp="2026-01-30 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:19.919099493 +0000 UTC m=+189.790597615" watchObservedRunningTime="2026-01-30 00:13:19.923540831 +0000 UTC m=+189.795038893" Jan 30 00:13:20 crc kubenswrapper[5103]: I0130 00:13:20.827607 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerStarted","Data":"61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483"} Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.006218 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.006741 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.425667 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" podStartSLOduration=80.425630105 podStartE2EDuration="1m20.425630105s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:21.422141 +0000 UTC m=+191.293639062" watchObservedRunningTime="2026-01-30 00:13:21.425630105 +0000 UTC m=+191.297128207" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.445530 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=16.445505928 podStartE2EDuration="16.445505928s" podCreationTimestamp="2026-01-30 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:21.440076166 +0000 UTC m=+191.311574228" watchObservedRunningTime="2026-01-30 00:13:21.445505928 +0000 UTC m=+191.317003990" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.840361 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerStarted","Data":"181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb"} Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.336644 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbjkv" podStartSLOduration=20.047607022 podStartE2EDuration="1m2.336605691s" 
podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:24.702890404 +0000 UTC m=+134.574388496" lastFinishedPulling="2026-01-30 00:13:06.991889073 +0000 UTC m=+176.863387165" observedRunningTime="2026-01-30 00:13:22.328544245 +0000 UTC m=+192.200042357" watchObservedRunningTime="2026-01-30 00:13:22.336605691 +0000 UTC m=+192.208103783" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.377547 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z59s8" podStartSLOduration=19.743292413 podStartE2EDuration="1m1.377519916s" podCreationTimestamp="2026-01-30 00:12:21 +0000 UTC" firstStartedPulling="2026-01-30 00:12:26.281350835 +0000 UTC m=+136.152848887" lastFinishedPulling="2026-01-30 00:13:07.915578298 +0000 UTC m=+177.787076390" observedRunningTime="2026-01-30 00:13:22.374191685 +0000 UTC m=+192.245689797" watchObservedRunningTime="2026-01-30 00:13:22.377519916 +0000 UTC m=+192.249017988" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.478604 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.479009 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.775527 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.775631 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.850900 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerStarted","Data":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.054326 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.096882 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.096998 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.097459 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8e87128-3548-4aa6-97ae-4fbdebabb51b" (UID: "a8e87128-3548-4aa6-97ae-4fbdebabb51b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.108560 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8e87128-3548-4aa6-97ae-4fbdebabb51b" (UID: "a8e87128-3548-4aa6-97ae-4fbdebabb51b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.153200 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qj2cx" podStartSLOduration=20.937751093 podStartE2EDuration="1m3.153185641s" podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:26.31279064 +0000 UTC m=+136.184288692" lastFinishedPulling="2026-01-30 00:13:08.528225188 +0000 UTC m=+178.399723240" observedRunningTime="2026-01-30 00:13:23.151771437 +0000 UTC m=+193.023269499" watchObservedRunningTime="2026-01-30 00:13:23.153185641 +0000 UTC m=+193.024683693" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.173231 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzx54" podStartSLOduration=20.380790706 podStartE2EDuration="1m3.173210238s" podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:25.726162762 +0000 UTC m=+135.597660814" lastFinishedPulling="2026-01-30 00:13:08.518582294 +0000 UTC m=+178.390080346" observedRunningTime="2026-01-30 00:13:23.170728588 +0000 UTC m=+193.042226730" watchObservedRunningTime="2026-01-30 00:13:23.173210238 +0000 UTC m=+193.044708290" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.198649 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.198709 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.858373 5103 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerDied","Data":"9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2"} Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.859781 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.858526 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.860801 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.138408 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z59s8" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" probeResult="failure" output=< Jan 30 00:13:25 crc kubenswrapper[5103]: timeout: failed to connect service ":50051" within 1s Jan 30 00:13:25 crc kubenswrapper[5103]: > Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.472624 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rjzw" podStartSLOduration=21.267830466 podStartE2EDuration="1m2.472607243s" podCreationTimestamp="2026-01-30 00:12:23 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.333482525 +0000 UTC m=+137.204980597" lastFinishedPulling="2026-01-30 00:13:08.538259322 +0000 UTC m=+178.409757374" observedRunningTime="2026-01-30 00:13:25.471480076 +0000 UTC m=+195.342978138" watchObservedRunningTime="2026-01-30 00:13:25.472607243 +0000 UTC m=+195.344105295" Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.502489 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7c7gb" podStartSLOduration=21.76005582 podStartE2EDuration="1m6.502467979s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="2026-01-30 00:12:23.587271191 +0000 UTC m=+133.458769243" lastFinishedPulling="2026-01-30 00:13:08.32968334 +0000 UTC m=+178.201181402" observedRunningTime="2026-01-30 00:13:25.499933008 +0000 UTC m=+195.371431070" watchObservedRunningTime="2026-01-30 00:13:25.502467979 +0000 UTC m=+195.373966031" Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.872747 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"0259a9e9eace4fc172ce32f2b8eecdb8ae6d65184d193746feff43d1d4feb368"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.874775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.892464 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xpqb7" podStartSLOduration=22.711067996 podStartE2EDuration="1m3.892441114s" 
podCreationTimestamp="2026-01-30 00:12:22 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.336634642 +0000 UTC m=+137.208132694" lastFinishedPulling="2026-01-30 00:13:08.51800775 +0000 UTC m=+178.389505812" observedRunningTime="2026-01-30 00:13:25.890319203 +0000 UTC m=+195.761817255" watchObservedRunningTime="2026-01-30 00:13:25.892441114 +0000 UTC m=+195.763939186" Jan 30 00:13:26 crc kubenswrapper[5103]: I0130 00:13:26.882545 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a"} Jan 30 00:13:26 crc kubenswrapper[5103]: I0130 00:13:26.897317 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vsrcq" podStartSLOduration=173.897294844 podStartE2EDuration="2m53.897294844s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:26.895504321 +0000 UTC m=+196.767002413" watchObservedRunningTime="2026-01-30 00:13:26.897294844 +0000 UTC m=+196.768792906" Jan 30 00:13:27 crc kubenswrapper[5103]: I0130 00:13:27.924467 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhpd7" podStartSLOduration=23.69411163 podStartE2EDuration="1m4.924446219s" podCreationTimestamp="2026-01-30 00:12:23 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.324611979 +0000 UTC m=+137.196110031" lastFinishedPulling="2026-01-30 00:13:08.554946568 +0000 UTC m=+178.426444620" observedRunningTime="2026-01-30 00:13:27.921341293 +0000 UTC m=+197.792839355" watchObservedRunningTime="2026-01-30 00:13:27.924446219 +0000 UTC m=+197.795944281" Jan 30 00:13:29 crc kubenswrapper[5103]: I0130 00:13:29.897847 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.883887 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.884839 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.972509 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.005044 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.005493 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.451011 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.452773 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.514696 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.816815 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.817293 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.869019 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.957408 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.964157 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.978600 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.289827 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.289883 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.343420 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.529410 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.570900 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.775977 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776362 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776421 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776920 5103 patch_prober.go:28] interesting 
pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777068 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777003 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777222 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" gracePeriod=2 Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.958351 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.931916 5103 ???:1] "http: TLS handshake error from 192.168.126.11:54516: no serving certificate available for the kubelet" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.937186 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.938379 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.016122 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.211396 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.212396 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzx54" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" containerID="cri-o://181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" gracePeriod=2 Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.936388 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" exitCode=0 Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.937426 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.937603 5103 scope.go:117] "RemoveContainer" 
containerID="cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.992760 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.062781 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.062868 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.132231 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.011960 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.049747 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.049832 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.105781 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.615239 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.618946 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qj2cx" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" containerID="cri-o://61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" gracePeriod=2 Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.018875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.978391 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" exitCode=0 Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.978518 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.832444 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932803 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932948 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932973 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.935399 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities" (OuterVolumeSpecName: "utilities") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.939632 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25" (OuterVolumeSpecName: "kube-api-access-ftd25") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "kube-api-access-ftd25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.992303 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.995007 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" exitCode=0 Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.995127 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998642 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998663 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998729 5103 scope.go:117] "RemoveContainer" containerID="181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.019604 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.020018 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" containerID="cri-o://372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" gracePeriod=2 Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.020599 5103 scope.go:117] "RemoveContainer" containerID="f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.034931 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.034977 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.041879 5103 scope.go:117] "RemoveContainer" containerID="9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.255311 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.339317 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.340493 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.342812 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.622557 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.622901 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhpd7" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" containerID="cri-o://3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" gracePeriod=2 Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.009866 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.010215 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.010288 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.884986 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" path="/var/lib/kubelet/pods/faf9931f-40f0-4d66-b375-89bec91fd6b8/volumes" Jan 30 00:13:41 crc kubenswrapper[5103]: I0130 00:13:41.017352 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:41 crc kubenswrapper[5103]: I0130 00:13:41.017440 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:42 crc kubenswrapper[5103]: I0130 00:13:42.775073 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:42 crc kubenswrapper[5103]: I0130 00:13:42.775185 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" 
podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.921377 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.922637 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.923082 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.923118 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-qj2cx" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" probeResult="unknown" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.385037 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503309 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503442 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503619 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.505088 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities" (OuterVolumeSpecName: "utilities") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.510509 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn" (OuterVolumeSpecName: "kube-api-access-2rhcn") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "kube-api-access-2rhcn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.606109 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.606181 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.939255 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.940463 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.940886 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.941182 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" probeResult="unknown" Jan 30 00:13:45 crc kubenswrapper[5103]: I0130 00:13:45.614937 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5103]: I0130 00:13:45.634436 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.743824 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" exitCode=0 Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.743943 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085"} Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749470 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"d31bb6a2f9fb799d1f7776dc6dbb0a5dcdd009e2858db6301a056354672735ba"} Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749878 5103 scope.go:117] "RemoveContainer" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749555 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.771694 5103 scope.go:117] "RemoveContainer" containerID="b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.799478 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.807524 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.969455 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.970438 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.971095 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.971221 5103 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bhpd7" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" probeResult="unknown" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.194859 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" path="/var/lib/kubelet/pods/3ce63351-9fca-4e0e-b4fb-3032a983ebcc/volumes" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.718447 5103 scope.go:117] "RemoveContainer" containerID="650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.736881 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.018021 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.018208 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.508209 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.510007 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" exitCode=-1 Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.510125 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a"} Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.776334 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.776674 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.983550 5103 trace.go:236] Trace[1622085638]: "Calculate volume metrics of trusted-ca for pod openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" (30-Jan-2026 00:13:51.665) (total time: 1318ms): Jan 30 00:13:52 crc kubenswrapper[5103]: 
Trace[1622085638]: [1.318488432s] [1.318488432s] END Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.993199 5103 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994405 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994451 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994500 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994514 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994537 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994551 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994571 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994584 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994601 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994613 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994640 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994652 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994675 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994686 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994732 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994745 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-content" 
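
Annotation (not part of the captured log): the entries above and below follow the klog text layout — severity+date, time of day, PID, source file:line, a quoted message, then key=value fields — and the records most useful for triaging this window are the probe failures/errors and the PLEG "ContainerDied" events. Below is a minimal Go sketch for pulling just those records out of a local copy of this file; the path "kubelet.log" and the chosen match strings are assumptions for illustration, not anything the log itself prescribes.

// Sketch: filter a kubelet log like this one down to probe failures and
// ContainerDied PLEG events. Assumes the klog text layout visible above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("kubelet.log") // hypothetical local copy of this artifact
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Individual records in this log can be several KB (event structs, probe output),
	// so raise the scanner's buffer above the 64 KB default.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, `"Probe failed"`) ||
			strings.Contains(line, `"Probe errored"`) ||
			strings.Contains(line, `"Type":"ContainerDied"`) {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

The match strings can be swapped for other message constants that appear in this file (for example "SyncLoop (PLEG): event for pod" or "Volume detached for volume") when a different slice of the timeline is of interest.
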
Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994953 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994993 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.995015 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.995036 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.787546 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.789567 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.857750 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.857934 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.858116 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.861105 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities" (OuterVolumeSpecName: "utilities") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.870768 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl" (OuterVolumeSpecName: "kube-api-access-6nnsl") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "kube-api-access-6nnsl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.961446 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.961504 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:54 crc kubenswrapper[5103]: I0130 00:13:54.041511 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.939345 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.939885 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.940747 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.940787 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" probeResult="unknown" Jan 30 00:13:54 crc kubenswrapper[5103]: I0130 00:13:54.991907 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.081838 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.081939 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.082109 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.083739 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities" (OuterVolumeSpecName: "utilities") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.088740 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp" (OuterVolumeSpecName: "kube-api-access-f77zp") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "kube-api-access-f77zp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.092808 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183659 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183707 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183719 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5103]: I0130 00:13:56.450003 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5103]: I0130 00:13:56.502006 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:58 crc kubenswrapper[5103]: I0130 00:13:58.493711 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:13:58 crc kubenswrapper[5103]: I0130 00:13:58.494343 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5103]: I0130 00:13:59.091176 5103 generic.go:358] "Generic (PLEG): container finished" podID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerID="14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de" exitCode=0 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.017361 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.017462 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609456 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" 
event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609594 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609666 5103 scope.go:117] "RemoveContainer" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609696 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609696 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.610874 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629636 5103 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629690 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerDied","Data":"14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629714 5103 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630369 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630434 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630569 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630596 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630647 5103 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631603 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631623 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631640 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631646 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631655 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631661 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631670 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631676 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631687 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631694 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631700 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631706 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631714 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631719 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631726 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631731 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" 
containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631737 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631744 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631753 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631758 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631765 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631770 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631782 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631820 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631829 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631834 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631843 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631848 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631853 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631858 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631865 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631870 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631967 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631983 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631990 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632000 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632009 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632019 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632026 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632032 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632040 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632060 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632067 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.662257 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.675025 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694455 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694517 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc 
kubenswrapper[5103]: I0130 00:14:01.694550 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694735 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.695604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797656 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797736 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797793 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797812 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797916 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797935 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797974 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797976 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.798013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.987575 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.642374 5103 scope.go:117] "RemoveContainer" containerID="ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.775555 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.775656 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.839212 5103 scope.go:117] "RemoveContainer" containerID="e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.857362 5103 scope.go:117] "RemoveContainer" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" Jan 30 00:14:02 crc kubenswrapper[5103]: E0130 00:14:02.859165 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.872361 5103 scope.go:117] "RemoveContainer" containerID="abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.888513 5103 scope.go:117] "RemoveContainer" containerID="487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.116451 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.128746 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.128802 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.129965 5103 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.131557 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.132500 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" exitCode=2 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.134536 5103 generic.go:358] "Generic (PLEG): container finished" podID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerID="7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3" exitCode=0 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220488 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220560 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220588 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220912 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.306197 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.306796 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322383 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322401 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322425 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322472 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322488 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322535 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322546 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.323154 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.323169 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.423686 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.424106 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.425001 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca" (OuterVolumeSpecName: "serviceca") pod "c5938973-a6f9-4d60-b605-3f02b2c1c84f" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.429834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx" (OuterVolumeSpecName: "kube-api-access-t2gvx") pod "c5938973-a6f9-4d60-b605-3f02b2c1c84f" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f"). InnerVolumeSpecName "kube-api-access-t2gvx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.526100 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.526182 5103 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867856 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"8e3ac715b91fddae359b350cd88496ad1a437748990a5e54da482342c811ef9d"} Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867928 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerDied","Data":"7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3"} Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867999 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.868026 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.868108 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870464 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870517 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870742 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870795 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0" gracePeriod=2 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.871933 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.872366 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.872697 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.143987 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.145989 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146682 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146704 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146712 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146763 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.149771 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.150608 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerDied","Data":"f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0"} Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.150646 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174021 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174624 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174804 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.430898 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.431888 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.432403 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.432582 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540492 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540596 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540627 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540657 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock" (OuterVolumeSpecName: "var-lock") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540863 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540874 5103 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.549190 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.641837 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.874543 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096edab0-9031-4bcd-8451-a93417372ee1" path="/var/lib/kubelet/pods/096edab0-9031-4bcd-8451-a93417372ee1/volumes" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.875479 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" path="/var/lib/kubelet/pods/3d4d4fce-00ed-4163-8a52-864aa4d324e6/volumes" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.155973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerDied","Data":"d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.156017 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.156142 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.157962 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159707 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159835 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0" exitCode=0 Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159893 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.160258 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.160576 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.270980 5103 scope.go:117] "RemoveContainer" containerID="fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.168324 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.171324 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" exitCode=0 Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432152 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432651 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432960 5103 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433237 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433489 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.433516 5103 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433805 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.634873 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Jan 30 00:14:07 crc kubenswrapper[5103]: E0130 00:14:07.035975 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.177185 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.178747 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.179558 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: E0130 00:14:07.837196 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Jan 30 00:14:09 crc kubenswrapper[5103]: 
I0130 00:14:09.019277 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.028165 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.028909 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029241 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029501 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029733 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.102668 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103078 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103235 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103276 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103320 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103429 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103438 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103494 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103500 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103621 5103 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103637 5103 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103650 5103 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103662 5103 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.108321 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.196030 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.197617 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.197634 5103 scope.go:117] "RemoveContainer" containerID="7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.205775 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.219514 5103 scope.go:117] "RemoveContainer" containerID="ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.226373 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.226863 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.227273 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.227702 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.245661 5103 scope.go:117] "RemoveContainer" containerID="8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.270806 5103 scope.go:117] "RemoveContainer" containerID="bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.294014 5103 scope.go:117] "RemoveContainer" containerID="b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.311586 5103 scope.go:117] "RemoveContainer" containerID="f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2" Jan 30 00:14:09 crc kubenswrapper[5103]: E0130 00:14:09.438748 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.875266 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876138 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876408 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876876 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.882800 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.219098 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"64472619026cbbe379251178003f955b4bb2a1307cb8e228ed55293d739ed29b"} Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.219466 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220083 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220188 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220552 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.221473 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.222291 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.222825 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: E0130 00:14:11.230508 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.226563 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.226975 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:12 crc kubenswrapper[5103]: E0130 00:14:12.639978 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.130:6443: connect: connection refused" interval="6.4s" Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.775522 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.775983 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.252372 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.253279 5103 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" exitCode=1 Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.253555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9"} Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.254325 5103 scope.go:117] "RemoveContainer" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.255515 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.256472 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257009 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257308 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257584 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.263194 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.263733 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.265941 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.266654 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.266944 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.267242 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.267515 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.968639 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.969531 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.969627 5103 prober.go:120] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.867735 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.869590 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.870232 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.870716 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.871090 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.871343 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.894360 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.894540 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:18 crc kubenswrapper[5103]: E0130 00:14:18.895815 5103 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.896166 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: W0130 00:14:18.933703 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456 WatchSource:0}: Error finding container d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456: Status 404 returned error can't find the container with id d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456 Jan 30 00:14:19 crc kubenswrapper[5103]: E0130 00:14:19.041403 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="7s" Jan 30 00:14:19 crc kubenswrapper[5103]: I0130 00:14:19.277009 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456"} Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874267 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874461 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874601 5103 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874752 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874900 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.876274 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: E0130 00:14:21.232102 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.315932 5103 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="7dadfa91d753626cf3d7e8b197d0f960f5f2ec28a1a89374b78494a4c475e0ae" exitCode=0 Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.316263 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"7dadfa91d753626cf3d7e8b197d0f960f5f2ec28a1a89374b78494a4c475e0ae"} Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.316978 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317212 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317426 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317641 5103 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317851 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 
30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318067 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318277 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318444 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: E0130 00:14:21.318704 5103 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.465239 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.234370 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331220 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1fb3571660e971b359d4c340d7be1878c55ae50327a9e7819b8f25b365fbe66b"} Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"954b9125e1be33a5e6eb4f89b7a006597732e602e95c91c596117e1751526b2f"} Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331567 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7796ff097def23d28226f770ba3c77a19d674857edfe45de2559f8735742b4fc"} Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339273 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d72f5041a0dc8f1ae006d43bcc632dd103a9b31a2c2af22496cbdc44ca692d27"} Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339646 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bad432e1280ca0bc9081f7baeb83400a8df530fd4427cc1249e99d70a3beed7c"} Jan 
30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339949 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339964 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.340231 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.896737 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.896791 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.902413 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]log ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/crd-informer-synced ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5103]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5103]: 
[+]poststarthook/priority-and-fairness-config-producer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/bootstrap-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-registration-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]autoregister-completion ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: livez check failed Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.902484 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:27 crc kubenswrapper[5103]: I0130 00:14:27.969945 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:27 crc kubenswrapper[5103]: I0130 00:14:27.970701 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.349373 5103 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.349407 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.369069 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.369101 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.493401 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.493472 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.902734 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.906484 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aac8e230-ac35-4811-a9b5-f24f8f62bb06" Jan 30 00:14:29 crc kubenswrapper[5103]: I0130 00:14:29.373965 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:29 crc kubenswrapper[5103]: I0130 00:14:29.373995 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:30 crc kubenswrapper[5103]: I0130 00:14:30.898603 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aac8e230-ac35-4811-a9b5-f24f8f62bb06" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969487 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969913 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969972 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.970956 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.971094 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e" gracePeriod=30 Jan 30 00:14:38 crc kubenswrapper[5103]: I0130 00:14:38.755032 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:14:38 crc kubenswrapper[5103]: I0130 00:14:38.853099 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.127517 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.233738 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.716718 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.885212 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.889620 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.984648 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.996774 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:14:40 crc kubenswrapper[5103]: I0130 00:14:40.072603 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:14:40 crc kubenswrapper[5103]: I0130 00:14:40.091318 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.043269 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.091110 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.236857 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.328872 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.398374 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.532571 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.558823 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 
00:14:41.697345 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.851813 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.156965 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.249023 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.421019 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.549958 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.732790 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.872578 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.947811 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.994986 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.053427 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.062547 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.181544 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.232023 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.327880 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.434077 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.488485 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.530359 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" 
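(Editorial aside on the repeated "Caches populated" messages above and below: each one is a client-go reflector reporting that its initial LIST/WATCH against the API server has filled a local cache — per-object reflectors for the Secrets and ConfigMaps referenced by pods, plus factory reflectors such as "k8s.io/client-go/informers/factory.go:160" for Nodes, Pods, RuntimeClasses, and CSIDrivers. The sketch below is not the kubelet's own code; it is a minimal client-go program showing the same cache-sync mechanism the log lines report. The kubeconfig path, the resync period, and the choice of ConfigMap/Secret informers are assumptions made for the example.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the example; the kubelet itself
	// authenticates via its own client configuration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Each informer from the factory is backed by a reflector that LISTs and
	// then WATCHes the API server, populating a local cache.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	secretInformer := factory.Core().V1().Secrets().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	factory.Start(ctx.Done())

	// WaitForCacheSync returns once each reflector's initial LIST has been
	// delivered -- the moment the kubelet logs "Caches populated".
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced, secretInformer.HasSynced) {
		panic("caches did not sync before the deadline")
	}
	fmt.Println("informer caches populated")
}
```

(The burst of these messages in this stretch of the log simply reflects the kubelet re-establishing those caches object by object after the API server became reachable again; the surrounding probe and mirror-pod entries continue unmodified below.)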
Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.622784 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.717239 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.770738 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.805396 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.826894 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.042276 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.142278 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.245134 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.617405 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.713301 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.775799 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.929340 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.980097 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.261830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.423310 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.429252 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.497157 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.622183 5103 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.652924 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.793333 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.897373 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.020181 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.116725 5103 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.159648 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.216872 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.431748 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.461864 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.530636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.594721 5103 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.596442 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.596414389 podStartE2EDuration="45.596414389s" podCreationTimestamp="2026-01-30 00:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:28.237810024 +0000 UTC m=+258.109308086" watchObservedRunningTime="2026-01-30 00:14:46.596414389 +0000 UTC m=+276.467912471" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.602496 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.602569 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.608146 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.609777 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.626960 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.626943364 podStartE2EDuration="18.626943364s" podCreationTimestamp="2026-01-30 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:46.622953267 +0000 UTC m=+276.494451329" watchObservedRunningTime="2026-01-30 00:14:46.626943364 +0000 UTC m=+276.498441416" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.659222 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.803660 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.974400 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.006383 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.125514 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.300555 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.300555 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.321665 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.329830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.401239 5103 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.425823 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.537255 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.681493 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.865595 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.156499 5103 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.192040 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.334685 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.362453 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.377782 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.530955 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.587362 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.047607 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.050122 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.170083 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.173889 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.228674 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.267593 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.308551 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.335107 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.346469 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.394133 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.595750 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.615607 5103 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.675529 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.694316 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.736345 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.820792 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.899508 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.991134 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.137359 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.150917 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.244409 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.386400 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.419410 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.423140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.452660 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.494899 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.749634 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.782277 5103 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.782550 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" gracePeriod=5 Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.905857 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.928475 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.212585 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.244939 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.264567 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.383680 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.422441 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.462317 5103 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.463568 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.479295 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.678967 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.719390 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.137218 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.142486 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.225779 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.314288 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.352468 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.543777 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.646868 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.714757 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.740813 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.761667 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.045299 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.112321 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.183785 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.284585 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.365301 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.691041 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.006568 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.058757 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.081443 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.423719 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.687140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.780416 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.398265 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.398399 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.542884 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.542997 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543189 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543247 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543304 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543808 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543890 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543936 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543986 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.560706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572504 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572582 5103 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" exitCode=137 Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572792 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572873 5103 scope.go:117] "RemoveContainer" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.627557 5103 scope.go:117] "RemoveContainer" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: E0130 00:14:56.628409 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": container with ID starting with b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7 not found: ID does not exist" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.628482 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7"} err="failed to get container status \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": rpc error: code = NotFound desc = could not find container \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": container with ID starting with b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7 not found: ID does not exist" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645236 5103 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645265 5103 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc 
kubenswrapper[5103]: I0130 00:14:56.645278 5103 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645289 5103 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645300 5103 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.876481 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.876808 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.892614 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.892713 5103 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bbd98a8a-8e00-459e-9b14-f5fbde204275" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.900218 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.900265 5103 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bbd98a8a-8e00-459e-9b14-f5fbde204275" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.493737 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.494247 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.494339 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.495333 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.495474 5103 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" gracePeriod=600 Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.911704 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.598481 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" exitCode=0 Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.598598 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.599202 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} Jan 30 00:15:03 crc kubenswrapper[5103]: I0130 00:15:03.698168 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.022676 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.669681 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.797528 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.901143 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:15:06 crc kubenswrapper[5103]: I0130 00:15:06.131535 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:15:06 crc kubenswrapper[5103]: I0130 00:15:06.874802 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.124667 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.206086 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.869700 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.658733 5103 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660564 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660645 5103 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e" exitCode=137 Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660730 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660779 5103 scope.go:117] "RemoveContainer" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.746649 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.978229 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.062697 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.253632 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.672663 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.675587 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6951b3d44456fd644dddb08caa9fe5616204189d4cb5d7fcafe82ceb45b4bc6a"} Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.678379 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:15:10 crc kubenswrapper[5103]: I0130 00:15:10.083553 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:15:10 crc kubenswrapper[5103]: I0130 00:15:10.901028 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.081193 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 
00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.085970 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.187941 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.253740 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.464927 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.549601 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.644385 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.974831 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.161853 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.594846 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.643163 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.736213 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.744526 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.164994 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.268310 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.632921 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.323033 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.684804 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.716247 5103 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:15:15 crc kubenswrapper[5103]: I0130 00:15:15.681615 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:15:15 crc kubenswrapper[5103]: I0130 00:15:15.791532 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:15:16 crc kubenswrapper[5103]: I0130 00:15:16.430309 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:16 crc kubenswrapper[5103]: I0130 00:15:16.594397 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.595969 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.704293 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.741957 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.744211 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.793737 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.969186 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.974436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.016955 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.200237 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.426291 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.745829 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.075711 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.316870 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.653423 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.993285 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.157741 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.226537 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.423791 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.912391 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.162268 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.282794 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.317410 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.749565 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.764562 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" exitCode=0 Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.764603 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec"} Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.765422 5103 scope.go:117] "RemoveContainer" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.769190 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.853185 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.070313 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 
00:15:22.196563 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.325220 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.461306 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.603615 5103 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.772794 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.773853 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" exitCode=1 Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.773930 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0"} Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.774080 5103 scope.go:117] "RemoveContainer" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.774987 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:22 crc kubenswrapper[5103]: E0130 00:15:22.775979 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.044185 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.172610 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.431616 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.595512 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.672672 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.784414 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.912525 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.941978 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.002693 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.183043 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.365240 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.645538 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.729604 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.196034 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.644637 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.954293 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.010211 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.393421 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.589623 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.617708 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.644229 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.646673 5103 
scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:26 crc kubenswrapper[5103]: E0130 00:15:26.647172 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.733396 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.187418 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.391262 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392373 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392399 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392413 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392421 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392467 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392476 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392593 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392607 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392618 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.402677 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.402839 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.404751 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.404828 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500387 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500599 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500675 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.601940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602138 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602828 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 
30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.609246 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.618933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.723341 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.090720 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.195586 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.200734 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.345883 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.347017 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:28 crc kubenswrapper[5103]: E0130 00:15:28.347552 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.743428 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823027 5103 generic.go:358] "Generic (PLEG): container finished" podID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerID="6c5ceedcb3e34d36eeae0f5ae68862363cc7dc5fe8f4f10ce0e542de91be2cc6" exitCode=0 Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823160 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerDied","Data":"6c5ceedcb3e34d36eeae0f5ae68862363cc7dc5fe8f4f10ce0e542de91be2cc6"} Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823451 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" 
event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerStarted","Data":"d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956"} Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.133302 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.388155 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.667611 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.993595 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.039864 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.053630 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.089088 5103 ???:1] "http: TLS handshake error from 192.168.126.11:38444: no serving certificate available for the kubelet" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132387 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132437 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132478 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.133082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.139934 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.140459 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp" (OuterVolumeSpecName: "kube-api-access-wpnlp") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "kube-api-access-wpnlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233931 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233969 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233977 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.706070 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.740151 5103 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.835934 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerDied","Data":"d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956"} Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.836333 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.835980 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.002125 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.443908 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.607533 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.201585 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.275379 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.400849 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.401156 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:15:33 crc kubenswrapper[5103]: I0130 00:15:33.190174 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:15:33 crc kubenswrapper[5103]: I0130 00:15:33.473493 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.610433 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.823517 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.941429 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:15:42 crc kubenswrapper[5103]: I0130 00:15:42.868147 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.939387 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.939816 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.940301 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.944147 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.641670 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.642554 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" containerID="cri-o://8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" gracePeriod=30 Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.659425 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.660107 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" containerID="cri-o://712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" gracePeriod=30 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.064490 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.073976 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093236 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093939 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093966 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093990 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093999 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094014 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094024 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094152 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094169 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094178 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.100252 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.102092 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.113931 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.119535 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.128897 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194426 5103 generic.go:358] "Generic (PLEG): container finished" podID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" exitCode=0 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194509 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194580 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerDied","Data":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194629 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerDied","Data":"f01ae49c3dbf6ce1c41262f39b1cfb6c8326085cddd7aa8f645756c56fc66e24"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194658 5103 scope.go:117] "RemoveContainer" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196039 5103 generic.go:358] "Generic (PLEG): container finished" podID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" exitCode=0 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196181 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerDied","Data":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196205 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" 
event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerDied","Data":"9131b9500cdfd415e7ec77b417734cc2ba2d9446de26cd67b54fba245814badb"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196260 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205231 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205278 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205326 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205367 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205386 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205403 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205417 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205438 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205468 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 
00:16:22.205502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205560 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205652 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205689 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205713 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205741 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205758 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205815 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod 
\"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205837 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205859 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205875 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207612 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207795 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp" (OuterVolumeSpecName: "tmp") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207971 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp" (OuterVolumeSpecName: "tmp") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208421 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config" (OuterVolumeSpecName: "config") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208468 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config" (OuterVolumeSpecName: "config") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208528 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208573 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca" (OuterVolumeSpecName: "client-ca") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.211843 5103 scope.go:117] "RemoveContainer" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.213339 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.213427 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5" (OuterVolumeSpecName: "kube-api-access-qgvv5") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "kube-api-access-qgvv5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.214858 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw" (OuterVolumeSpecName: "kube-api-access-4fxzw") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "kube-api-access-4fxzw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: E0130 00:16:22.218937 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": container with ID starting with 8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88 not found: ID does not exist" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.219001 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} err="failed to get container status \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": rpc error: code = NotFound desc = could not find container \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": container with ID starting with 8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88 not found: ID does not exist" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.219037 5103 scope.go:117] "RemoveContainer" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.220967 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.247821 5103 scope.go:117] "RemoveContainer" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: E0130 00:16:22.248279 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": container with ID starting with 712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9 not found: ID does not exist" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.248341 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} err="failed to get container status \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": rpc error: code = NotFound desc = could not find container \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": container with ID starting with 712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9 not found: ID does not exist" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307120 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307221 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307253 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307304 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307331 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307363 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307395 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307630 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307674 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307715 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: 
\"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307762 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307811 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307827 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307838 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307849 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307858 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307869 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307880 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307891 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307903 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307914 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307924 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308451 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308719 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308804 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309216 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309369 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309550 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.310121 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.314013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.314380 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 
00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.327870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.328790 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.422828 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.439103 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.531203 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.536523 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.552795 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.561270 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.704278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.746176 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: W0130 00:16:22.749075 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ad9bcf_7352_426a_8c3a_94904bd8616c.slice/crio-8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759 WatchSource:0}: Error finding container 8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759: Status 404 returned error can't find the container with id 8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.875131 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" path="/var/lib/kubelet/pods/d3abf3af-b96a-44fa-bd40-1c92bab19b92/volumes" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.875679 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" path="/var/lib/kubelet/pods/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204/volumes" Jan 30 00:16:22 crc kubenswrapper[5103]: 
I0130 00:16:22.922111 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.922173 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.203792 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" event={"ID":"158c1d70-030a-44de-b9af-51dafc4857f5","Type":"ContainerStarted","Data":"68c4fdb1edcdd731feb109f9311b3ace5c45b3a322cf99d6dc3e0c1c7fb092ed"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.204606 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" event={"ID":"158c1d70-030a-44de-b9af-51dafc4857f5","Type":"ContainerStarted","Data":"6768c319ec0bab4c451f22f569c0670d70e61c645a1dcb38b3fc1ee646eb326a"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.204652 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.209657 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerStarted","Data":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.209708 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerStarted","Data":"8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.210813 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.217692 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.248705 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" podStartSLOduration=2.248687877 podStartE2EDuration="2.248687877s" podCreationTimestamp="2026-01-30 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:23.246613906 +0000 UTC m=+373.118111978" watchObservedRunningTime="2026-01-30 00:16:23.248687877 +0000 UTC m=+373.120185949" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.252801 5103 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" podStartSLOduration=2.252786588 podStartE2EDuration="2.252786588s" podCreationTimestamp="2026-01-30 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:23.228917362 +0000 UTC m=+373.100415434" watchObservedRunningTime="2026-01-30 00:16:23.252786588 +0000 UTC m=+373.124284650" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.797294 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:58 crc kubenswrapper[5103]: I0130 00:16:58.493736 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:16:58 crc kubenswrapper[5103]: I0130 00:16:58.495417 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:01 crc kubenswrapper[5103]: I0130 00:17:01.646375 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:01 crc kubenswrapper[5103]: I0130 00:17:01.646838 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" containerID="cri-o://23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" gracePeriod=30 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.003992 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.036807 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037552 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037836 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.044942 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.055532 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.157918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.157993 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158083 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158161 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158188 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158392 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158426 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158474 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158562 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158648 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158680 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp" (OuterVolumeSpecName: "tmp") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.159282 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.159359 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config" (OuterVolumeSpecName: "config") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.169366 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg" (OuterVolumeSpecName: "kube-api-access-7dnmg") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "kube-api-access-7dnmg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.169378 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259589 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259663 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259704 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259802 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259890 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259956 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259975 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260024 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260046 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260125 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: 
I0130 00:17:02.260921 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.261621 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.261765 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.265246 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.293255 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.368760 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.494916 5103 generic.go:358] "Generic (PLEG): container finished" podID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" exitCode=0 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496034 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerDied","Data":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496103 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerDied","Data":"8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759"} Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496132 5103 scope.go:117] "RemoveContainer" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496335 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.560472 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.565142 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.566701 5103 scope.go:117] "RemoveContainer" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: E0130 00:17:02.567817 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": container with ID starting with 23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903 not found: ID does not exist" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.567857 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} err="failed to get container status \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": rpc error: code = NotFound desc = could not find container \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": container with ID starting with 23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903 not found: ID does not exist" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.665877 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: W0130 00:17:02.673763 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31517abb_bb81_4882_9d24_462e89cad611.slice/crio-6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6 WatchSource:0}: Error finding container 6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6: Status 404 returned error can't find the container with id 6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.879281 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" path="/var/lib/kubelet/pods/b9ad9bcf-7352-426a-8c3a-94904bd8616c/volumes" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507352 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" event={"ID":"31517abb-bb81-4882-9d24-462e89cad611","Type":"ContainerStarted","Data":"c3723f8482f35cd737f362bcd21a14284d94808d4d4cffff06ef6755f73b52e6"} Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507432 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" event={"ID":"31517abb-bb81-4882-9d24-462e89cad611","Type":"ContainerStarted","Data":"6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6"} Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507686 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.515809 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.532657 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" podStartSLOduration=2.532634285 podStartE2EDuration="2.532634285s" podCreationTimestamp="2026-01-30 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:03.531489937 +0000 UTC m=+413.402987999" watchObservedRunningTime="2026-01-30 00:17:03.532634285 +0000 UTC m=+413.404132337" Jan 30 00:17:28 crc kubenswrapper[5103]: I0130 00:17:28.493551 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:28 crc kubenswrapper[5103]: I0130 00:17:28.494127 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.493784 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.494669 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.494755 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.495863 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.495995 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" gracePeriod=600 Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.945834 5103 
generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" exitCode=0 Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.945925 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.946305 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.946334 5103 scope.go:117] "RemoveContainer" containerID="47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" Jan 30 00:18:03 crc kubenswrapper[5103]: I0130 00:18:03.097808 5103 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:18:03 crc kubenswrapper[5103]: I0130 00:18:03.207185 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:09 crc kubenswrapper[5103]: I0130 00:18:09.723015 5103 ???:1] "http: TLS handshake error from 192.168.126.11:49936: no serving certificate available for the kubelet" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.261151 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" containerID="cri-o://661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" gracePeriod=15 Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.730624 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.784283 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785177 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785207 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785318 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.794750 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.794913 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797103 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797205 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797258 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797329 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797363 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797392 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797416 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797516 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797559 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797592 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797622 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797644 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797669 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797729 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797949 5103 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798367 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798864 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798887 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.799432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804278 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804419 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.807427 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk" (OuterVolumeSpecName: "kube-api-access-7h6wk") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "kube-api-access-7h6wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.807473 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.813480 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.821253 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.822196 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.824754 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898665 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898731 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898755 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898780 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898823 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898867 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898901 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898937 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z9sv\" (UniqueName: 
\"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898961 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898998 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899021 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899074 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899115 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899147 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899226 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899241 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899256 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899269 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899282 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899295 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899307 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899321 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899335 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899347 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899358 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899370 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899385 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.000891 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.000990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001825 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001864 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001946 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002002 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002102 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002172 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6z9sv\" (UniqueName: \"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002208 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002284 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.003658 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006590 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006729 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006895 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006949 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.007400 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.007652 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.009150 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.010810 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.011894 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.013242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.013951 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.014750 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.021473 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z9sv\" (UniqueName: \"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.152888 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154758 5103 generic.go:358] "Generic (PLEG): container finished" podID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" exitCode=0 Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154818 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerDied","Data":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154858 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerDied","Data":"e3d46683d3f3d86228a063dcb193d36e8067e6dad542d18de17ac86ad6dc3b86"} Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154885 5103 scope.go:117] "RemoveContainer" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.155093 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.177489 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.182895 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.191784 5103 scope.go:117] "RemoveContainer" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: E0130 00:18:29.192369 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": container with ID starting with 661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53 not found: ID does not exist" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.192565 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} err="failed to get container status \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": rpc error: code = NotFound desc = could not find container \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": container with ID starting with 661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53 not found: ID does not exist" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.398361 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.164366 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" event={"ID":"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d","Type":"ContainerStarted","Data":"9301a08c12aea9cb302ac0b756f739416190283b29d56b91af5ee52511ca98cd"} Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.165180 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" event={"ID":"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d","Type":"ContainerStarted","Data":"cd88805702bcd9241ba5e98510c9c7947528f28b61ee5c64b8f4362451d75c8c"} Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.167544 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.192533 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" podStartSLOduration=27.192514579 podStartE2EDuration="27.192514579s" podCreationTimestamp="2026-01-30 00:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:18:30.187244119 +0000 UTC m=+500.058742181" watchObservedRunningTime="2026-01-30 00:18:30.192514579 +0000 UTC m=+500.064012631" Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.495180 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:30 
crc kubenswrapper[5103]: I0130 00:18:30.887681 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" path="/var/lib/kubelet/pods/10feec13-3e3a-46a2-8fdd-c1098eebd334/volumes" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.467700 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.468695 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7c7gb" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" containerID="cri-o://7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.474988 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.475333 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbjkv" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" containerID="cri-o://775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.489557 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.489808 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" containerID="cri-o://bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.504460 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.504742 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z59s8" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" containerID="cri-o://9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.510295 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.515545 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.519450 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.519945 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rjzw" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" containerID="cri-o://ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.526526 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.648762 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649272 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649323 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.753945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.753989 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754026 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754065 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754797 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.755219 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.760797 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.771814 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.899917 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.904808 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.912567 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.914808 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.929271 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.929344 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.935254 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058494 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058912 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.059616 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp" (OuterVolumeSpecName: "tmp") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.059986 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities" (OuterVolumeSpecName: "utilities") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061570 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061636 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061667 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061690 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061786 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061931 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061983 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062012 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062036 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod 
\"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062578 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062629 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062704 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063332 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063357 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063369 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063812 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities" (OuterVolumeSpecName: "utilities") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities" (OuterVolumeSpecName: "utilities") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063883 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities" (OuterVolumeSpecName: "utilities") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.064999 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw" (OuterVolumeSpecName: "kube-api-access-6bzkw") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "kube-api-access-6bzkw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.065768 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.066973 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x" (OuterVolumeSpecName: "kube-api-access-qtd7x") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "kube-api-access-qtd7x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.078429 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz" (OuterVolumeSpecName: "kube-api-access-zdntz") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "kube-api-access-zdntz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.080889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8" (OuterVolumeSpecName: "kube-api-access-4gwr8") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "kube-api-access-4gwr8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.081309 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.083525 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5" (OuterVolumeSpecName: "kube-api-access-cgxx5") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "kube-api-access-cgxx5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.110869 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.112350 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164469 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164504 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164519 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164529 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164541 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164551 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164562 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164574 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164587 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164598 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164608 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164615 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.184789 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.266528 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.339291 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367434 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367521 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367544 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367656 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367702 5103 scope.go:117] "RemoveContainer" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371697 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371927 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371929 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.372239 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"61764b58f50ceebb2c7b19c23cfca937d7976fd5804c25d5eefbebe83ee09940"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.374927 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" event={"ID":"0180b3c6-131f-4a8c-ac9a-1b410e056ae2","Type":"ContainerStarted","Data":"40913b27a3c7f0d304d4dc9072ac1226e961880500f8c0246062547e1fc5e20b"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377734 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377847 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377876 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377995 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385644 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385752 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385778 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385853 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390208 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390262 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390314 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390505 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.394410 5103 scope.go:117] "RemoveContainer" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.422658 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.428237 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.432567 5103 scope.go:117] "RemoveContainer" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.464638 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.472745 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.477702 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.484762 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.486245 5103 scope.go:117] "RemoveContainer" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.486716 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": container with ID starting with 7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858 not found: ID does not exist" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 
00:19:00.486918 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} err="failed to get container status \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": rpc error: code = NotFound desc = could not find container \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": container with ID starting with 7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487094 5103 scope.go:117] "RemoveContainer" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.487598 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": container with ID starting with a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be not found: ID does not exist" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487643 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be"} err="failed to get container status \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": rpc error: code = NotFound desc = could not find container \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": container with ID starting with a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487670 5103 scope.go:117] "RemoveContainer" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.488265 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": container with ID starting with 2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153 not found: ID does not exist" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.488290 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153"} err="failed to get container status \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": rpc error: code = NotFound desc = could not find container \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": container with ID starting with 2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.488306 5103 scope.go:117] "RemoveContainer" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.498378 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.507646 5103 scope.go:117] "RemoveContainer" 
containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.507755 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.512574 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.518464 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.524794 5103 scope.go:117] "RemoveContainer" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.540890 5103 scope.go:117] "RemoveContainer" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.543341 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": container with ID starting with 775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147 not found: ID does not exist" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543397 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} err="failed to get container status \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": rpc error: code = NotFound desc = could not find container \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": container with ID starting with 775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543430 5103 scope.go:117] "RemoveContainer" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.543806 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": container with ID starting with 1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5 not found: ID does not exist" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543847 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5"} err="failed to get container status \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": rpc error: code = NotFound desc = could not find container \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": container with ID starting with 1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543878 5103 scope.go:117] "RemoveContainer" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.544153 5103 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": container with ID starting with b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f not found: ID does not exist" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.544183 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f"} err="failed to get container status \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": rpc error: code = NotFound desc = could not find container \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": container with ID starting with b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.544201 5103 scope.go:117] "RemoveContainer" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.557858 5103 scope.go:117] "RemoveContainer" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.587418 5103 scope.go:117] "RemoveContainer" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603475 5103 scope.go:117] "RemoveContainer" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.603745 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": container with ID starting with ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0 not found: ID does not exist" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603776 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} err="failed to get container status \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": rpc error: code = NotFound desc = could not find container \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": container with ID starting with ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603802 5103 scope.go:117] "RemoveContainer" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.604008 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": container with ID starting with f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708 not found: ID does not exist" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604032 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} err="failed to get container status \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": rpc error: code = NotFound desc = could not find container \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": container with ID starting with f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604066 5103 scope.go:117] "RemoveContainer" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.604372 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": container with ID starting with fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f not found: ID does not exist" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604421 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f"} err="failed to get container status \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": rpc error: code = NotFound desc = could not find container \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": container with ID starting with fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604437 5103 scope.go:117] "RemoveContainer" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.620940 5103 scope.go:117] "RemoveContainer" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.673939 5103 scope.go:117] "RemoveContainer" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.691622 5103 scope.go:117] "RemoveContainer" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.692071 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": container with ID starting with 9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39 not found: ID does not exist" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692112 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} err="failed to get container status \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": rpc error: code = NotFound desc = could not find container \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": container with ID starting with 9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692139 5103 
scope.go:117] "RemoveContainer" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.692436 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": container with ID starting with 92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292 not found: ID does not exist" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692459 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292"} err="failed to get container status \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": rpc error: code = NotFound desc = could not find container \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": container with ID starting with 92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692476 5103 scope.go:117] "RemoveContainer" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.693151 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": container with ID starting with 8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35 not found: ID does not exist" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.693176 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35"} err="failed to get container status \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": rpc error: code = NotFound desc = could not find container \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": container with ID starting with 8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.693191 5103 scope.go:117] "RemoveContainer" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.708276 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727240 5103 scope.go:117] "RemoveContainer" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.727659 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": container with ID starting with bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1 not found: ID does not exist" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727719 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} err="failed to get container status \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": rpc error: code = NotFound desc = could not find container \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": container with ID starting with bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727756 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.728157 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": container with ID starting with 3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0 not found: ID does not exist" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.728193 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0"} err="failed to get container status \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": rpc error: code = NotFound desc = could not find container \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": container with ID starting with 3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.876657 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" path="/var/lib/kubelet/pods/6c3bfb26-42f9-43f4-8126-b941aea6ecca/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.877773 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" path="/var/lib/kubelet/pods/9807e5f5-fa63-4e0c-9b52-3c0044337c40/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.878730 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" path="/var/lib/kubelet/pods/b15f695a-0fc1-4ab5-aad2-341f3bf6822d/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.879990 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" path="/var/lib/kubelet/pods/c312b248-250c-4b33-9c7a-f79c1e73a75b/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.880804 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" path="/var/lib/kubelet/pods/ebb7f7db-c773-49f6-b58b-6bd929f25f3a/volumes" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.405907 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" event={"ID":"0180b3c6-131f-4a8c-ac9a-1b410e056ae2","Type":"ContainerStarted","Data":"13897c56a6b4836c8273e8f74e9c06cfba82e6ca2ab6094ff098d5d5a49883b7"} Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.406126 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 
00:19:01.411971 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.427389 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" podStartSLOduration=2.42735038 podStartE2EDuration="2.42735038s" podCreationTimestamp="2026-01-30 00:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:19:01.426816257 +0000 UTC m=+531.298314349" watchObservedRunningTime="2026-01-30 00:19:01.42735038 +0000 UTC m=+531.298848472" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.487286 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489374 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489421 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489436 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489445 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489480 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489489 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489502 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489510 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489533 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489566 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489573 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489583 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489591 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489602 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489610 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489637 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489645 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489665 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489672 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489682 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489689 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489726 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489733 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489741 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489748 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489757 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489766 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489896 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489912 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" 
containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489922 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489956 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489968 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489977 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490123 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490152 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490311 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.513866 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.514010 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.516277 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700462 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700549 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700576 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802183 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod 
\"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802445 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802652 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802726 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.824795 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.838813 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.044408 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:02 crc kubenswrapper[5103]: W0130 00:19:02.057446 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c68a080_5bee_4c96_8683_dfbc9187c20f.slice/crio-93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e WatchSource:0}: Error finding container 93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e: Status 404 returned error can't find the container with id 93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.078930 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.090171 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.091980 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.093154 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.105820 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.105970 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.106027 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207392 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207785 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.208667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.208748 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod 
\"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.230701 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.415918 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4" exitCode=0 Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.417752 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4"} Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.417873 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerStarted","Data":"93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e"} Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.436007 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.854362 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: W0130 00:19:02.865335 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda044cd80_0a4b_43d0_bfa8_107bddaa28fc.slice/crio-031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f WatchSource:0}: Error finding container 031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f: Status 404 returned error can't find the container with id 031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427527 5103 generic.go:358] "Generic (PLEG): container finished" podID="a044cd80-0a4b-43d0-bfa8-107bddaa28fc" containerID="56796670bdd69ae09dc9e44816d52f869952458f5b4179e2b791a86641393e0f" exitCode=0 Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427650 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerDied","Data":"56796670bdd69ae09dc9e44816d52f869952458f5b4179e2b791a86641393e0f"} Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f"} Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.885962 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.892807 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.896983 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.902565 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936169 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936220 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936271 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936963 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.946603 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.968499 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037935 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037982 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038045 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038086 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038165 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038251 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: 
\"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038415 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038514 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038597 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038680 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.058684 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.059099 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140038 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140116 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140159 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140198 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140221 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140419 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140846 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.141760 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.141936 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.143976 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.144366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.155254 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.159966 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.220860 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.262641 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.440988 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8"} Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443451 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded" exitCode=0 Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443541 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded"} Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443559 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.492102 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.500739 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.501015 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.502940 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.505139 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551433 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551710 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.652963 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.653840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656394 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656599 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656954 5103 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.686532 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.833766 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.014420 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:05 crc kubenswrapper[5103]: W0130 00:19:05.021887 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd1ccc1_87a2_43d0_9183_1e907f804a16.slice/crio-4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124 WatchSource:0}: Error finding container 4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124: Status 404 returned error can't find the container with id 4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451096 5103 generic.go:358] "Generic (PLEG): container finished" podID="fc2ed764-8df0-4a15-9d66-c2abad3ee367" containerID="973bd685f3ccbbaddd3b49dd0f04cc38187a240864159c845f7057275144dd10" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451190 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerDied","Data":"973bd685f3ccbbaddd3b49dd0f04cc38187a240864159c845f7057275144dd10"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451244 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"9c710c8d15552c2920d34de83d6efca72c1149ac37c0a88d0cdf3e52b54843c7"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.455332 5103 generic.go:358] "Generic (PLEG): container finished" podID="a044cd80-0a4b-43d0-bfa8-107bddaa28fc" containerID="a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.455452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerDied","Data":"a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.458834 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerStarted","Data":"445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461611 5103 generic.go:358] "Generic (PLEG): container finished" 
podID="5fd1ccc1-87a2-43d0-9183-1e907f804a16" containerID="de1441470b3cd6741e15b71b1ffd200dd84612fb9d93c2b2c686102ea1985d1a" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerDied","Data":"de1441470b3cd6741e15b71b1ffd200dd84612fb9d93c2b2c686102ea1985d1a"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461725 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467072 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" event={"ID":"c0646b67-80e8-42d2-8d99-b1870fd68749","Type":"ContainerStarted","Data":"67622fb5ebea95e5b06d5ffa8816f0019f894435e76ebdc3d22070183e5138d7"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467112 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" event={"ID":"c0646b67-80e8-42d2-8d99-b1870fd68749","Type":"ContainerStarted","Data":"0a3cb38d200ae3e472aa0224baf5cbda58215d57aaba0960eaa727d40139c366"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467580 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.511416 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" podStartSLOduration=2.511398872 podStartE2EDuration="2.511398872s" podCreationTimestamp="2026-01-30 00:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:19:05.50732368 +0000 UTC m=+535.378821762" watchObservedRunningTime="2026-01-30 00:19:05.511398872 +0000 UTC m=+535.382896944" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.529246 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-29m6m" podStartSLOduration=3.657254541 podStartE2EDuration="4.529225397s" podCreationTimestamp="2026-01-30 00:19:01 +0000 UTC" firstStartedPulling="2026-01-30 00:19:02.417590667 +0000 UTC m=+532.289088719" lastFinishedPulling="2026-01-30 00:19:03.289561483 +0000 UTC m=+533.161059575" observedRunningTime="2026-01-30 00:19:05.528411516 +0000 UTC m=+535.399909578" watchObservedRunningTime="2026-01-30 00:19:05.529225397 +0000 UTC m=+535.400723469" Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.475452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41"} Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.477949 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"461b7270878817012b7cd8e6aae200369e0a4f00b80dbc35dbb6996276b704aa"} Jan 30 00:19:06 crc kubenswrapper[5103]: 
I0130 00:19:06.481158 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359"} Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.518860 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vq6tr" podStartSLOduration=3.739159207 podStartE2EDuration="4.518844969s" podCreationTimestamp="2026-01-30 00:19:02 +0000 UTC" firstStartedPulling="2026-01-30 00:19:03.42886662 +0000 UTC m=+533.300364702" lastFinishedPulling="2026-01-30 00:19:04.208552372 +0000 UTC m=+534.080050464" observedRunningTime="2026-01-30 00:19:06.518429049 +0000 UTC m=+536.389927101" watchObservedRunningTime="2026-01-30 00:19:06.518844969 +0000 UTC m=+536.390343021" Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.490071 5103 generic.go:358] "Generic (PLEG): container finished" podID="fc2ed764-8df0-4a15-9d66-c2abad3ee367" containerID="d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41" exitCode=0 Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.490255 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerDied","Data":"d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41"} Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.492714 5103 generic.go:358] "Generic (PLEG): container finished" podID="5fd1ccc1-87a2-43d0-9183-1e907f804a16" containerID="222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359" exitCode=0 Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.492808 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerDied","Data":"222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.498723 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"453706a1a352476d7e0ea77dcb3ff53e3627f9ddd0b9c0b16a46ad3486167e12"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.501652 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"97cea0009b727ca205b8b6934c8bf6828c8dbe09508d3e576e6f37feaf93ced4"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.520606 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wmvfq" podStartSLOduration=4.824040318 podStartE2EDuration="5.520585185s" podCreationTimestamp="2026-01-30 00:19:03 +0000 UTC" firstStartedPulling="2026-01-30 00:19:05.452124772 +0000 UTC m=+535.323622824" lastFinishedPulling="2026-01-30 00:19:06.148669639 +0000 UTC m=+536.020167691" observedRunningTime="2026-01-30 00:19:08.515056207 +0000 UTC m=+538.386554279" watchObservedRunningTime="2026-01-30 00:19:08.520585185 +0000 UTC m=+538.392083237" Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.543209 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gz47" 
podStartSLOduration=3.899925153 podStartE2EDuration="4.54319403s" podCreationTimestamp="2026-01-30 00:19:04 +0000 UTC" firstStartedPulling="2026-01-30 00:19:05.462401959 +0000 UTC m=+535.333900011" lastFinishedPulling="2026-01-30 00:19:06.105670836 +0000 UTC m=+535.977168888" observedRunningTime="2026-01-30 00:19:08.540124373 +0000 UTC m=+538.411622445" watchObservedRunningTime="2026-01-30 00:19:08.54319403 +0000 UTC m=+538.414692082" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.839875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.840370 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.897245 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.438786 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.438830 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.487666 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.564443 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.566597 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.221112 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.221771 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.274530 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.599069 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.834672 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.835122 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.888776 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:15 crc kubenswrapper[5103]: I0130 00:19:15.597722 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:27 crc kubenswrapper[5103]: I0130 
00:19:27.499141 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:27 crc kubenswrapper[5103]: I0130 00:19:27.580905 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.649679 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" containerID="cri-o://ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" gracePeriod=30 Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.793088 5103 generic.go:358] "Generic (PLEG): container finished" podID="d69ff998-a349-40e4-8653-bfded7d60952" containerID="ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" exitCode=0 Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.793218 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerDied","Data":"ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3"} Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.097785 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224326 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224442 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224487 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224747 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224826 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224891 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224997 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225291 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225668 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225867 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.226485 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.234279 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.235871 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7" (OuterVolumeSpecName: "kube-api-access-plqc7") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "kube-api-access-plqc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.235988 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.236495 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.239307 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.260148 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327270 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327316 5103 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327336 5103 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327352 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327370 5103 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327387 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803749 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803779 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerDied","Data":"4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d"} Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803872 5103 scope.go:117] "RemoveContainer" containerID="ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.858707 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.869347 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:54 crc kubenswrapper[5103]: I0130 00:19:54.880847 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69ff998-a349-40e4-8653-bfded7d60952" path="/var/lib/kubelet/pods/d69ff998-a349-40e4-8653-bfded7d60952/volumes" Jan 30 00:19:58 crc kubenswrapper[5103]: I0130 00:19:58.494138 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:19:58 crc kubenswrapper[5103]: I0130 00:19:58.494573 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.147448 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.149849 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.150065 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.150581 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.173447 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.173630 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.176066 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.176877 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.178221 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.338748 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.440539 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.479854 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.504088 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.744829 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.861597 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerStarted","Data":"7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d"} Jan 30 00:20:03 crc kubenswrapper[5103]: I0130 00:20:03.889128 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerStarted","Data":"2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60"} Jan 30 00:20:03 crc kubenswrapper[5103]: I0130 00:20:03.905634 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" podStartSLOduration=1.327138092 podStartE2EDuration="3.905618603s" podCreationTimestamp="2026-01-30 00:20:00 +0000 UTC" firstStartedPulling="2026-01-30 00:20:00.752887434 +0000 UTC m=+590.624385496" lastFinishedPulling="2026-01-30 00:20:03.331367955 +0000 UTC m=+593.202866007" observedRunningTime="2026-01-30 00:20:03.903858429 +0000 UTC m=+593.775356481" watchObservedRunningTime="2026-01-30 00:20:03.905618603 +0000 UTC m=+593.777116665" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.043623 5103 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-5fhxj" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.068524 5103 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-5fhxj" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.899220 5103 generic.go:358] "Generic (PLEG): container finished" podID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerID="2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60" exitCode=0 Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.899367 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerDied","Data":"2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60"} Jan 30 00:20:05 crc kubenswrapper[5103]: I0130 00:20:05.069961 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:04 +0000 UTC" deadline="2026-02-25 12:32:25.004181789 +0000 UTC" Jan 30 00:20:05 crc kubenswrapper[5103]: I0130 00:20:05.070020 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="636h12m19.934166546s" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.070669 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:04 +0000 UTC" deadline="2026-02-25 12:28:03.634943452 +0000 UTC" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.070733 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="636h7m57.564217027s" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.258766 5103 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.329869 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.338939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m" (OuterVolumeSpecName: "kube-api-access-4kn8m") pod "d6b2c0b7-a88b-4f50-945a-938210a1c4cc" (UID: "d6b2c0b7-a88b-4f50-945a-938210a1c4cc"). InnerVolumeSpecName "kube-api-access-4kn8m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.431304 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916294 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerDied","Data":"7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d"} Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916363 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916369 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:11 crc kubenswrapper[5103]: I0130 00:20:11.180408 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:11 crc kubenswrapper[5103]: I0130 00:20:11.180675 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:28 crc kubenswrapper[5103]: I0130 00:20:28.493459 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:20:28 crc kubenswrapper[5103]: I0130 00:20:28.494238 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494141 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494888 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494955 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.495999 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.496289 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" gracePeriod=600 Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.632408 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.279620 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" exitCode=0 Jan 30 00:20:59 crc 
kubenswrapper[5103]: I0130 00:20:59.279677 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.280164 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.280207 5103 scope.go:117] "RemoveContainer" containerID="346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.153753 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155329 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155350 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155482 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.161854 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.162003 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.165883 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.166377 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.166751 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.295026 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.396223 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.430741 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.487571 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.718867 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:01 crc kubenswrapper[5103]: I0130 00:22:01.719404 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerStarted","Data":"dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe"} Jan 30 00:22:02 crc kubenswrapper[5103]: I0130 00:22:02.725875 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerStarted","Data":"85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e"} Jan 30 00:22:02 crc kubenswrapper[5103]: I0130 00:22:02.744819 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" podStartSLOduration=1.597656546 podStartE2EDuration="2.744793578s" podCreationTimestamp="2026-01-30 00:22:00 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.743343804 +0000 UTC m=+710.614841856" lastFinishedPulling="2026-01-30 00:22:01.890480836 +0000 UTC m=+711.761978888" observedRunningTime="2026-01-30 00:22:02.740581784 +0000 UTC m=+712.612079836" watchObservedRunningTime="2026-01-30 00:22:02.744793578 +0000 UTC m=+712.616291630" Jan 30 00:22:03 crc kubenswrapper[5103]: I0130 00:22:03.731643 5103 generic.go:358] "Generic (PLEG): container finished" podID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerID="85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e" exitCode=0 Jan 30 00:22:03 crc kubenswrapper[5103]: I0130 00:22:03.731745 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerDied","Data":"85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e"} Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.022038 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.076386 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.104509 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd" (OuterVolumeSpecName: "kube-api-access-zbwbd") pod "b6eabbd6-7a3e-476d-9412-948faeb44ce2" (UID: "b6eabbd6-7a3e-476d-9412-948faeb44ce2"). InnerVolumeSpecName "kube-api-access-zbwbd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.178155 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerDied","Data":"dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe"} Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745752 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745830 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.072069 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074257 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074430 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074591 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.083572 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.088561 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211574 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211650 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211680 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313242 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.314167 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.314498 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.344872 5103 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.407325 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.640517 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.072951 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" exitCode=0 Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.073093 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a"} Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.073153 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"6f091d47ee89d8aab7c93e3b02a00d901ef85d7be59ad907b801db6f5ea7772a"} Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.493693 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.494319 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:59 crc kubenswrapper[5103]: I0130 00:22:59.084954 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} Jan 30 00:23:00 crc kubenswrapper[5103]: I0130 00:23:00.096565 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" exitCode=0 Jan 30 00:23:00 crc kubenswrapper[5103]: I0130 00:23:00.096773 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} Jan 30 00:23:01 crc kubenswrapper[5103]: I0130 00:23:01.105441 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} Jan 30 
00:23:01 crc kubenswrapper[5103]: I0130 00:23:01.139507 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6lt8" podStartSLOduration=3.500160568 podStartE2EDuration="4.139480924s" podCreationTimestamp="2026-01-30 00:22:57 +0000 UTC" firstStartedPulling="2026-01-30 00:22:58.07462435 +0000 UTC m=+767.946122442" lastFinishedPulling="2026-01-30 00:22:58.713944736 +0000 UTC m=+768.585442798" observedRunningTime="2026-01-30 00:23:01.133242082 +0000 UTC m=+771.004740164" watchObservedRunningTime="2026-01-30 00:23:01.139480924 +0000 UTC m=+771.010979016" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.407793 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.408253 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.478183 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:08 crc kubenswrapper[5103]: I0130 00:23:08.223734 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:08 crc kubenswrapper[5103]: I0130 00:23:08.286154 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.175426 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6lt8" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" containerID="cri-o://f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" gracePeriod=2 Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.635927 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735332 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735423 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735462 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.737505 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities" (OuterVolumeSpecName: "utilities") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.744924 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4" (OuterVolumeSpecName: "kube-api-access-8ppr4") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). InnerVolumeSpecName "kube-api-access-8ppr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.795119 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837027 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837317 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837444 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187192 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" exitCode=0 Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187317 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187359 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187433 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"6f091d47ee89d8aab7c93e3b02a00d901ef85d7be59ad907b801db6f5ea7772a"} Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187493 5103 scope.go:117] "RemoveContainer" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.224727 5103 scope.go:117] "RemoveContainer" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.229931 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.239773 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.255505 5103 scope.go:117] "RemoveContainer" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.288619 5103 scope.go:117] "RemoveContainer" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.289247 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": container with ID starting with f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836 not found: ID does not exist" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289296 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} err="failed to get container status \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": rpc error: code = NotFound desc = could not find container \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": container with ID starting with f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836 not found: ID does not exist" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289324 5103 scope.go:117] "RemoveContainer" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.289612 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": container with ID starting with 8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88 not found: ID does not exist" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289754 5103 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} err="failed to get container status \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": rpc error: code = NotFound desc = could not find container \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": container with ID starting with 8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88 not found: ID does not exist" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289854 5103 scope.go:117] "RemoveContainer" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.290442 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": container with ID starting with e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a not found: ID does not exist" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.290493 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a"} err="failed to get container status \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": rpc error: code = NotFound desc = could not find container \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": container with ID starting with e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a not found: ID does not exist" Jan 30 00:23:12 crc kubenswrapper[5103]: I0130 00:23:12.888787 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" path="/var/lib/kubelet/pods/19842367-c9ea-467c-bd39-d3cd7c857c2b/volumes" Jan 30 00:23:28 crc kubenswrapper[5103]: I0130 00:23:28.494038 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:23:28 crc kubenswrapper[5103]: I0130 00:23:28.495450 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.247122 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.248225 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" containerID="cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.248761 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" 
podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" containerID="cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380038 5103 generic.go:358] "Generic (PLEG): container finished" podID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerID="6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" exitCode=0 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380093 5103 generic.go:358] "Generic (PLEG): container finished" podID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerID="031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" exitCode=0 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380082 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380131 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.449511 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.449955 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" containerID="cri-o://531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450018 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" containerID="cri-o://2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450060 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" containerID="cri-o://519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450169 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" containerID="cri-o://f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450207 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" containerID="cri-o://7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450188 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" 
podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450250 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" containerID="cri-o://f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.479929 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" containerID="cri-o://2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.492033 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.520469 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521190 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-content" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521214 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-content" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521233 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521241 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521250 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521258 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521274 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521281 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521310 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-utilities" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521322 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-utilities" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521455 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521471 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521487 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.525663 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575562 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575735 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575791 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575814 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575983 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.576079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.576136 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc 
kubenswrapper[5103]: I0130 00:23:34.576178 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.577264 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.577296 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.586652 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc" (OuterVolumeSpecName: "kube-api-access-prndc") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "kube-api-access-prndc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.586699 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677731 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677783 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677832 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677974 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677995 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678011 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678025 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678578 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.681333 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.696016 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.720983 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-acl-logging/0.log" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.722608 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-controller/0.log" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.723666 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779405 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779479 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779509 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779511 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779544 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779629 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779717 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779743 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779791 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779852 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779892 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779950 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779990 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780032 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780135 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780205 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780300 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780321 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780353 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780379 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log" (OuterVolumeSpecName: "node-log") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780884 5103 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780909 5103 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780927 5103 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780979 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781021 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781079 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781148 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781184 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket" (OuterVolumeSpecName: "log-socket") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781276 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781329 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781298 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781364 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781376 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash" (OuterVolumeSpecName: "host-slash") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781785 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.782139 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.786603 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.798879 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-525dp"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800195 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn" (OuterVolumeSpecName: "kube-api-access-j2mbn") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "kube-api-access-j2mbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800385 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800429 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800457 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800470 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800492 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800506 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800538 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800555 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800566 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800594 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kubecfg-setup" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800607 5103 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kubecfg-setup" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800638 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800651 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800666 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800702 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800729 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800741 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800901 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800926 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800942 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800956 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800974 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800992 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.801007 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.801026 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.805692 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.843756 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882634 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882706 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882812 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882965 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883020 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883045 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883095 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883119 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883302 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883337 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883402 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883482 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883566 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883603 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883710 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883755 5103 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883847 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883886 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884133 5103 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884161 5103 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884178 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884196 5103 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884211 5103 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884226 5103 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884241 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884258 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884272 5103 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") on node \"crc\" DevicePath 
\"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884286 5103 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884301 5103 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884318 5103 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884333 5103 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884351 5103 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884366 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884381 5103 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884399 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.897732 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985910 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985970 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986023 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986027 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986043 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986131 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986196 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: 
I0130 00:23:34.986296 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986331 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986361 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986393 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986420 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986462 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986484 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986537 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986608 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986629 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986649 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986665 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986714 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986682 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986887 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986964 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.987149 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.987493 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988153 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988249 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988415 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988582 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988642 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988691 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988833 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.989716 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: 
\"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.989801 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.992743 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.016101 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.161152 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: W0130 00:23:35.182370 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2eeeee_fabb_485c_b725_16a296f58c96.slice/crio-7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32 WatchSource:0}: Error finding container 7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32: Status 404 returned error can't find the container with id 7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391093 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391172 5103 generic.go:358] "Generic (PLEG): container finished" podID="a7dd7e02-4357-4643-8c23-2fb57ba70405" containerID="1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb" exitCode=2 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerDied","Data":"1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.392852 5103 scope.go:117] "RemoveContainer" containerID="1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.393299 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"800548153d9e6aba1afb85f182785a661eee618e7b42f4fe127d860272336e95"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.399650 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-acl-logging/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 
00:23:35.400418 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-controller/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401723 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401752 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401760 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401768 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401775 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401782 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401789 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" exitCode=143 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401799 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" exitCode=143 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401893 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401925 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401937 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401947 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 
00:23:35.401955 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401964 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401976 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401984 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401989 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401996 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402004 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402010 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402015 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402020 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402025 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402029 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402034 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402039 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402065 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402075 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402085 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402092 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402098 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402102 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402107 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402112 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402117 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402121 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402125 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402132 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"38221fc62e1b3d592b338664053e425c486a6c0fa3cf8ead449229dbfc4659da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402138 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402143 5103 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402148 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402152 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402157 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402162 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402167 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402171 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402175 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402189 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402372 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406932 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"578d2296c0b9b147f002bab00ce887ae174a1dfc57c08f5d70b218ff4df99c74"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406956 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406965 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.407035 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.408499 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.455218 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.458562 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.472469 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.477941 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.481958 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.493278 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.543291 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.560357 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.581953 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.600456 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.618490 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.633533 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.649582 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.651330 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.652037 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: 
code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.652272 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.655167 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655224 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655254 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.655861 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655912 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656161 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.656650 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656672 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656686 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.657174 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657200 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657214 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.657681 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657723 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657751 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.658391 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" 
containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658412 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658427 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.658854 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658877 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658895 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.659138 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659162 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659177 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659511 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659533 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660006 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660041 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660331 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660355 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660611 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660640 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660870 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 
30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660888 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661067 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661090 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661272 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661294 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661491 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661511 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661723 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661766 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661999 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status 
\"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662021 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662201 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662250 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662496 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662520 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662749 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662776 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662981 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663003 5103 scope.go:117] "RemoveContainer" 
containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663258 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663276 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663486 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663499 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663827 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663849 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.664678 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.664703 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665029 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find 
container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665064 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665345 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665380 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665758 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665776 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666026 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666062 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666430 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666450 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666700 5103 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666719 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666883 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666899 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667065 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667085 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667286 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667302 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667480 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 
2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667496 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667728 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667742 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667906 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667921 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668187 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668227 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668417 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668442 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668632 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.444291 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"1c96f4b0c1dc88063fcdd170ca416360f8b21df2d89cb589689443128774a010"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.444380 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"3ff669de89dd11b86c8c6ade1f21eb1b843c4ec83d3c7a3bc086f0faf8f660c6"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.452403 5103 generic.go:358] "Generic (PLEG): container finished" podID="4f2eeeee-fabb-485c-b725-16a296f58c96" containerID="5480578daefef342b60440da2a8c82fa7379571f14bda252e4eacbdfce4267a0" exitCode=0 Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.452552 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerDied","Data":"5480578daefef342b60440da2a8c82fa7379571f14bda252e4eacbdfce4267a0"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.455441 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.455649 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"1c3b59e2cda1f03dc4a6b2af74a2dd9b717de4547f0c7cdd9d896b9db0816d37"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.513023 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" podStartSLOduration=2.513000987 podStartE2EDuration="2.513000987s" podCreationTimestamp="2026-01-30 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:23:36.470857476 +0000 UTC m=+806.342355558" watchObservedRunningTime="2026-01-30 00:23:36.513000987 +0000 UTC m=+806.384499049" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.875953 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" path="/var/lib/kubelet/pods/7d918c96-a16b-4836-ac5a-83c3388f5468/volumes" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.877382 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" path="/var/lib/kubelet/pods/b3efa2c9-9a52-46ea-b9ad-f708dd386e79/volumes" Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465597 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" 
event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"a59c50deadb71ab7529cc5235f6ee78a3c451b9366c7c77494c25cc29398ddb0"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465675 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"b47f9ba5bf6e0ba2f24dd97c6fdd8582b956b05be899f6b7ea707a991e241426"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465704 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"9798f41598c26fb1d9c36d1f7f8062236cd58860a21e8d013597dfe6fc4f0428"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465727 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"d7e03f3bde92d63378ee648779340aa81bb05e0bbaf3a0c48063217217861704"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465750 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"4f0932da209b17bd45b79dab32b319588d7f4d5201dbe532ff9b9d0992d37a00"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"12c0b36156e64212436c1448eadd7f9d77ed9daba09018cfb0395e91e3dd6d81"} Jan 30 00:23:40 crc kubenswrapper[5103]: I0130 00:23:40.488820 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"1db3989ac62b8fa21c41ef8d83db7024b90e0a927a18b100cbb4b74ce8efb6ec"} Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.508362 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"d2cec7bfc7e0d791f3c154776f751f855d988fffd40f27993eb89f9c299868a4"} Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.508998 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.509016 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.509026 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.536843 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.539584 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.541583 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" 
podStartSLOduration=8.541566658 podStartE2EDuration="8.541566658s" podCreationTimestamp="2026-01-30 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:23:42.538882653 +0000 UTC m=+812.410380715" watchObservedRunningTime="2026-01-30 00:23:42.541566658 +0000 UTC m=+812.413064730" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.493032 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.494032 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.494148 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.495182 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.495308 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" gracePeriod=600 Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.635421 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" exitCode=0 Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.635519 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.637757 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.637800 5103 scope.go:117] "RemoveContainer" containerID="399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.144534 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.151441 5103 util.go:30] "No sandbox for pod can be 
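
The machine-config-daemon restart above follows the standard liveness-probe path: the kubelet's HTTP probe to 127.0.0.1:8798/health gets "connection refused", the container is marked unhealthy, and it is killed with the pod's grace period (600 seconds here) before a replacement is started. A minimal client-go-style sketch of such a probe follows; only the path and port come from the log, while the period, threshold, and the field name ProbeHandler (called Handler in older API versions) are assumptions.

package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessForMCD sketches an HTTP liveness probe against the endpoint seen in
// the log (GET http://127.0.0.1:8798/health). Timings are illustrative only.
var livenessForMCD = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/health",
			Port: intstr.FromInt(8798),
		},
	},
	PeriodSeconds:    10, // assumed probe interval
	FailureThreshold: 3,  // assumed: this many refused connections trigger a restart
}
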
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.156622 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.159334 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.160858 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.161012 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.257469 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.358840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.396356 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.469327 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.707164 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: W0130 00:24:00.716313 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ad58695_120d_466b_bec0_3198637da77d.slice/crio-7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257 WatchSource:0}: Error finding container 7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257: Status 404 returned error can't find the container with id 7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257 Jan 30 00:24:01 crc kubenswrapper[5103]: I0130 00:24:01.654850 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerStarted","Data":"7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257"} Jan 30 00:24:02 crc kubenswrapper[5103]: I0130 00:24:02.664396 5103 generic.go:358] "Generic (PLEG): container finished" podID="5ad58695-120d-466b-bec0-3198637da77d" containerID="cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e" exitCode=0 Jan 30 00:24:02 crc kubenswrapper[5103]: I0130 00:24:02.664534 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerDied","Data":"cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e"} Jan 30 00:24:03 crc kubenswrapper[5103]: I0130 00:24:03.985977 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.112187 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"5ad58695-120d-466b-bec0-3198637da77d\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.121432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28" (OuterVolumeSpecName: "kube-api-access-xfd28") pod "5ad58695-120d-466b-bec0-3198637da77d" (UID: "5ad58695-120d-466b-bec0-3198637da77d"). InnerVolumeSpecName "kube-api-access-xfd28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.215351 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680600 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerDied","Data":"7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257"} Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680650 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680669 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257" Jan 30 00:24:11 crc kubenswrapper[5103]: I0130 00:24:11.712086 5103 scope.go:117] "RemoveContainer" containerID="6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" Jan 30 00:24:11 crc kubenswrapper[5103]: I0130 00:24:11.753195 5103 scope.go:117] "RemoveContainer" containerID="031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" Jan 30 00:24:14 crc kubenswrapper[5103]: I0130 00:24:14.560244 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.456585 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461531 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461562 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461720 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.635860 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.636010 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703003 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703091 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703309 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804309 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804415 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.805178 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.805189 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.840523 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod 
\"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.950598 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.195590 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.866231 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" exitCode=0 Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.866509 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8"} Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.867258 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"e98b3f20b6b07b3b4311f68953e87f12a1d1c55d1f4c76c02c7e9c2872921338"} Jan 30 00:24:31 crc kubenswrapper[5103]: I0130 00:24:31.876872 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} Jan 30 00:24:32 crc kubenswrapper[5103]: I0130 00:24:32.884704 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" exitCode=0 Jan 30 00:24:32 crc kubenswrapper[5103]: I0130 00:24:32.884835 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} Jan 30 00:24:33 crc kubenswrapper[5103]: I0130 00:24:33.902661 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} Jan 30 00:24:33 crc kubenswrapper[5103]: I0130 00:24:33.933559 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cdjcm" podStartSLOduration=4.380336716 podStartE2EDuration="4.9335355s" podCreationTimestamp="2026-01-30 00:24:29 +0000 UTC" firstStartedPulling="2026-01-30 00:24:30.868039029 +0000 UTC m=+860.739537121" lastFinishedPulling="2026-01-30 00:24:31.421237813 +0000 UTC m=+861.292735905" observedRunningTime="2026-01-30 00:24:33.930230019 +0000 UTC m=+863.801728141" watchObservedRunningTime="2026-01-30 00:24:33.9335355 +0000 UTC m=+863.805033562" Jan 30 00:24:39 crc kubenswrapper[5103]: I0130 00:24:39.951356 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:39 crc kubenswrapper[5103]: I0130 00:24:39.951734 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:40 crc kubenswrapper[5103]: I0130 00:24:40.017131 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:40 crc kubenswrapper[5103]: I0130 00:24:40.999724 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:41 crc kubenswrapper[5103]: I0130 00:24:41.058773 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:42 crc kubenswrapper[5103]: I0130 00:24:42.962861 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cdjcm" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" containerID="cri-o://fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" gracePeriod=2 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.110866 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.111908 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-29m6m" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" containerID="cri-o://445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" gracePeriod=30 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.910473 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.970950 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" exitCode=0 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971168 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971219 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"e98b3f20b6b07b3b4311f68953e87f12a1d1c55d1f4c76c02c7e9c2872921338"} Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971176 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971243 5103 scope.go:117] "RemoveContainer" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.976078 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" exitCode=0 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.976220 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6"} Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.001953 5103 scope.go:117] "RemoveContainer" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025430 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025552 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025651 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.027363 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities" (OuterVolumeSpecName: "utilities") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.028515 5103 scope.go:117] "RemoveContainer" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.032015 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f" (OuterVolumeSpecName: "kube-api-access-l7q6f") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "kube-api-access-l7q6f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046503 5103 scope.go:117] "RemoveContainer" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.046875 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": container with ID starting with fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7 not found: ID does not exist" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046914 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} err="failed to get container status \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": rpc error: code = NotFound desc = could not find container \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": container with ID starting with fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7 not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046938 5103 scope.go:117] "RemoveContainer" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.047203 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": container with ID starting with c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e not found: ID does not exist" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047231 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} err="failed to get container status \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": rpc error: code = NotFound desc = could not find container \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": container with ID starting with c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047251 5103 scope.go:117] "RemoveContainer" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.047500 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": container with ID starting with 8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8 not found: ID does not exist" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047525 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8"} err="failed to get container status \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": rpc error: code = NotFound desc = could not 
find container \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": container with ID starting with 8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8 not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.079002 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127635 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127667 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127676 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.170719 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.228961 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.229147 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.229217 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.230460 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities" (OuterVolumeSpecName: "utilities") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.234350 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p" (OuterVolumeSpecName: "kube-api-access-6wk6p") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). 
InnerVolumeSpecName "kube-api-access-6wk6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.257133 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.304613 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.313550 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330648 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330674 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330684 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.875026 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" path="/var/lib/kubelet/pods/41134658-93eb-415b-b6ac-9d0a73083d6a/volumes" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990628 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e"} Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990716 5103 scope.go:117] "RemoveContainer" containerID="445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990787 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.021116 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.027324 5103 scope.go:117] "RemoveContainer" containerID="e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded" Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.029219 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.044360 5103 scope.go:117] "RemoveContainer" containerID="44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4" Jan 30 00:24:46 crc kubenswrapper[5103]: I0130 00:24:46.874694 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" path="/var/lib/kubelet/pods/3c68a080-5bee-4c96-8683-dfbc9187c20f/volumes" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.819847 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821671 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821704 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821729 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821741 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821757 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821771 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821814 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821826 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821846 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821859 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821874 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-content" Jan 30 00:24:48 crc 
kubenswrapper[5103]: I0130 00:24:48.821886 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.822096 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.822124 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.832715 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.832920 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.840635 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891520 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891691 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891723 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.993954 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.994234 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.994307 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.995019 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.995020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.053110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.160477 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.432628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.044751 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="6eac2d9439de32ff558430295827cf834d20212b102dcb1cbad169ab2ebd4e6b" exitCode=0 Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.045210 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"6eac2d9439de32ff558430295827cf834d20212b102dcb1cbad169ab2ebd4e6b"} Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.045276 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerStarted","Data":"6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444"} Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.062799 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="7df1ea0921bac385d4e348342f6a864cf1c38f8272c1fa6d930dec98940f8ec8" exitCode=0 Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.062901 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"7df1ea0921bac385d4e348342f6a864cf1c38f8272c1fa6d930dec98940f8ec8"} Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.368842 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.380616 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.385307 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.447703 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.447895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.448015 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548691 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548946 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548982 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.549392 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.549863 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.573060 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.697568 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.879987 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.069211 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="aef55cb05bf11143f58f8e0b8e055586faee2995d04fc7874acb3b506132512f" exitCode=0 Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.069288 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"aef55cb05bf11143f58f8e0b8e055586faee2995d04fc7874acb3b506132512f"} Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070810 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" exitCode=0 Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070928 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89"} Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070955 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"025adefaf8142ce355a7aa90e0f2747b128b7c7fc3858dd12fcfec2adb94ac75"} Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.090798 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.296400 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371549 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371689 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371715 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.373829 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle" (OuterVolumeSpecName: "bundle") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.385227 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x" (OuterVolumeSpecName: "kube-api-access-x7x2x") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "kube-api-access-x7x2x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.385388 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util" (OuterVolumeSpecName: "util") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472675 5103 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472708 5103 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472717 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.101222 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" exitCode=0 Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.101316 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107422 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444"} Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107506 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444" Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107453 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015031 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015647 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="util" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015666 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="util" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015678 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="pull" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015685 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="pull" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015704 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015713 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015821 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.027259 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.027413 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.029787 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.093895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.094178 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.094291 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.114369 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.138033 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nv4qh" podStartSLOduration=3.460727133 podStartE2EDuration="4.138012092s" podCreationTimestamp="2026-01-30 00:24:52 +0000 UTC" firstStartedPulling="2026-01-30 00:24:53.071596788 +0000 UTC m=+882.943094840" lastFinishedPulling="2026-01-30 00:24:53.748881747 +0000 UTC m=+883.620379799" observedRunningTime="2026-01-30 00:24:56.13183446 +0000 UTC m=+886.003332532" watchObservedRunningTime="2026-01-30 00:24:56.138012092 +0000 UTC m=+886.009510154" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.195922 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.196086 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.196196 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.197327 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.197671 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.234287 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.350422 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.590256 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: W0130 00:24:56.600108 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod969009ac_f9ae_48c0_b45e_bf9a5844b7ff.slice/crio-162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed WatchSource:0}: Error finding container 162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed: Status 404 returned error can't find the container with id 162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.122989 5103 generic.go:358] "Generic (PLEG): container finished" podID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" containerID="1ad6d275bb45cd7dd78c0284bea8eeb19469eca12ddc818acd7996f928a2d92e" exitCode=0 Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.123236 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" event={"ID":"969009ac-f9ae-48c0-b45e-bf9a5844b7ff","Type":"ContainerDied","Data":"1ad6d275bb45cd7dd78c0284bea8eeb19469eca12ddc818acd7996f928a2d92e"} Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.123320 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" event={"ID":"969009ac-f9ae-48c0-b45e-bf9a5844b7ff","Type":"ContainerStarted","Data":"162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed"} Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.363830 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.364237 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.365477 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.401332 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.405757 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.413310 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513273 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513328 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513403 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614678 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614854 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614919 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.615831 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.616020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.639208 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.721612 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.953480 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: W0130 00:24:57.963274 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1decb0e_49d8_404d_966d_b8249754982f.slice/crio-d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844 WatchSource:0}: Error finding container d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844: Status 404 returned error can't find the container with id d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844 Jan 30 00:24:58 crc kubenswrapper[5103]: I0130 00:24:58.129070 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerStarted","Data":"34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc"} Jan 30 00:24:58 crc kubenswrapper[5103]: I0130 00:24:58.130475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerStarted","Data":"d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844"} Jan 30 00:24:58 crc kubenswrapper[5103]: E0130 00:24:58.131841 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:24:59 crc kubenswrapper[5103]: I0130 00:24:59.138243 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc" exitCode=0 Jan 30 00:24:59 crc kubenswrapper[5103]: I0130 00:24:59.138312 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc"} Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.698173 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.698223 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.774538 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:03 crc kubenswrapper[5103]: I0130 00:25:03.226868 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:04 crc kubenswrapper[5103]: I0130 00:25:04.349656 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.172393 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="ad869cf96bc9dddcccbe1599fb46df8956db41181ed889ba6b3358c30d513e6f" exitCode=0 Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.172454 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"ad869cf96bc9dddcccbe1599fb46df8956db41181ed889ba6b3358c30d513e6f"} Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.173127 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nv4qh" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" containerID="cri-o://305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" gracePeriod=2 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.034435 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142610 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142729 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142802 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.143502 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities" (OuterVolumeSpecName: "utilities") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.162119 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw" (OuterVolumeSpecName: "kube-api-access-6vnkw") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "kube-api-access-6vnkw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.183610 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="bb60d5dd31f94af481701fffe7f1bd08115c8eff923b0dbe231d6e93cf2d86ce" exitCode=0 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.183691 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"bb60d5dd31f94af481701fffe7f1bd08115c8eff923b0dbe231d6e93cf2d86ce"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185743 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" exitCode=0 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185896 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185917 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"025adefaf8142ce355a7aa90e0f2747b128b7c7fc3858dd12fcfec2adb94ac75"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185934 5103 scope.go:117] "RemoveContainer" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.186101 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.212224 5103 scope.go:117] "RemoveContainer" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.232267 5103 scope.go:117] "RemoveContainer" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.244068 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.244099 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254456 5103 scope.go:117] "RemoveContainer" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.254821 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": container with ID starting with 305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8 not found: ID does not exist" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254855 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} err="failed to get container status \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": rpc error: code = NotFound desc = could not find container \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": container with ID starting with 305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254879 5103 scope.go:117] "RemoveContainer" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.255129 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": container with ID starting with cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75 not found: ID does not exist" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255153 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} err="failed to get container status \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": rpc error: code = NotFound desc = could not find container \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": container with ID starting with cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255179 5103 scope.go:117] "RemoveContainer" 
containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.255366 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": container with ID starting with d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89 not found: ID does not exist" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255384 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89"} err="failed to get container status \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": rpc error: code = NotFound desc = could not find container \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": container with ID starting with d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.274811 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.345210 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.517101 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.521121 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.875363 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" path="/var/lib/kubelet/pods/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21/volumes" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.994897 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995629 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-content" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995651 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-content" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995696 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-utilities" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995705 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-utilities" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995717 5103 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995725 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995845 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.011042 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.011212 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.013816 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.016572 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.017688 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-jdjvr\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.130923 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.135376 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.139532 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.139671 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-xn4jj\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.144078 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.147648 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.147978 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.153107 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.170436 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254734 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254868 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.255013 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.255073 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.277111 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") 
pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.320098 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.325399 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.325497 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.326583 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.327309 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-8trv4\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.327575 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359686 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359749 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359845 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.363814 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.367480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.370625 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.378563 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.453796 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.460642 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.460700 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.462897 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.475645 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.531201 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533003 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533016 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533059 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="util" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533065 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="util" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533076 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="pull" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533081 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="pull" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533166 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.561877 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562020 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562946 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562994 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563180 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563486 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.566112 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-4p58v\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.569447 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle" (OuterVolumeSpecName: "bundle") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.569890 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.570082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9" (OuterVolumeSpecName: "kube-api-access-bs5v9") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "kube-api-access-bs5v9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.597102 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util" (OuterVolumeSpecName: "util") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.600511 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.643475 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664495 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664542 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664675 5103 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664687 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664696 5103 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.665579 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.760161 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.767094 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " 
pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.767142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.768233 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.788367 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.805711 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: W0130 00:25:07.823169 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88f3da2_a157_4b8b_9fe6_ff6ef7466a8d.slice/crio-8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778 WatchSource:0}: Error finding container 8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778: Status 404 returned error can't find the container with id 8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778 Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.886764 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.127700 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:08 crc kubenswrapper[5103]: W0130 00:25:08.133166 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7fdb9f_be0e_428a_88e1_283c31de8ad1.slice/crio-2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38 WatchSource:0}: Error finding container 2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38: Status 404 returned error can't find the container with id 2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38 Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.175908 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:08 crc kubenswrapper[5103]: W0130 00:25:08.179744 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d2bde2_5437_4672_b6b6_f2babe73dff0.slice/crio-a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4 WatchSource:0}: Error finding container a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4: Status 404 returned error can't find the container with id a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4 Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.200960 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" event={"ID":"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d","Type":"ContainerStarted","Data":"8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.202297 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" event={"ID":"888a411a-eaa9-4b4f-877b-0653ce686e73","Type":"ContainerStarted","Data":"870a5afa83e27078eaf7065dd2f85217830e3de67523125c47c4e1afe6e815dd"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.204873 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" event={"ID":"957968da-8046-4a89-91ac-ecb8c0e83e85","Type":"ContainerStarted","Data":"cfc450716b9dc4967b0b8bdc7c2d7267bad5d06a7ed252eeef41823ba91674f4"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.206356 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" event={"ID":"e7d2bde2-5437-4672-b6b6-f2babe73dff0","Type":"ContainerStarted","Data":"a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.207274 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" event={"ID":"8c7fdb9f-be0e-428a-88e1-283c31de8ad1","Type":"ContainerStarted","Data":"2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210247 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" 
event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210277 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844" Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210375 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.954192 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.954374 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.955596 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.416226 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.419841 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.432479 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.437461 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.268802 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.274858 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.277537 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-c57bl\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.277640 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.289387 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.294775 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.321316 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.321378 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423189 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423765 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.460900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: 
\"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.608491 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:19 crc kubenswrapper[5103]: I0130 00:25:19.648292 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.335664 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" event={"ID":"e7d2bde2-5437-4672-b6b6-f2babe73dff0","Type":"ContainerStarted","Data":"648d99cc723956ede678fd843e24f55e68b1ac8c566d35346e833284c9d1828e"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.335886 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.337107 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" event={"ID":"8c7fdb9f-be0e-428a-88e1-283c31de8ad1","Type":"ContainerStarted","Data":"be0579b4eb21394cdc71d98a5d8cde738d0c294a1eb8b412899f2605ced8d92d"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.337359 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.338525 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" event={"ID":"67b9f41f-8bca-414a-aabd-5398b6f1ffe6","Type":"ContainerStarted","Data":"5298064fa6c1d9b59596c4422ff9d2f2f7038900596d440633b997a05b4313aa"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.340263 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" event={"ID":"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d","Type":"ContainerStarted","Data":"ebd23426f4af1009cb12abd62a537705f14957a23150e93563be229fb417e68e"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.341844 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" event={"ID":"888a411a-eaa9-4b4f-877b-0653ce686e73","Type":"ContainerStarted","Data":"8e45ab069d765abb8028c0e37ab5817fff76356f6b91187b7cd925638c0600a5"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.343441 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" event={"ID":"957968da-8046-4a89-91ac-ecb8c0e83e85","Type":"ContainerStarted","Data":"ab45c908619101167cf24f626cd5f12cef86c0073baef3e5281ecef13e05355c"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.361892 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" podStartSLOduration=2.126964376 podStartE2EDuration="13.361878548s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:08.181471565 +0000 UTC m=+898.052969617" lastFinishedPulling="2026-01-30 00:25:19.416385737 +0000 UTC m=+909.287883789" observedRunningTime="2026-01-30 
00:25:20.361105659 +0000 UTC m=+910.232603731" watchObservedRunningTime="2026-01-30 00:25:20.361878548 +0000 UTC m=+910.233376600" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.363863 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.388608 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" podStartSLOduration=2.701486364 podStartE2EDuration="14.388589123s" podCreationTimestamp="2026-01-30 00:25:06 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.695802207 +0000 UTC m=+897.567300259" lastFinishedPulling="2026-01-30 00:25:19.382904966 +0000 UTC m=+909.254403018" observedRunningTime="2026-01-30 00:25:20.386762518 +0000 UTC m=+910.258260580" watchObservedRunningTime="2026-01-30 00:25:20.388589123 +0000 UTC m=+910.260087165" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.412066 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" podStartSLOduration=1.835417461 podStartE2EDuration="13.412038729s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.826502994 +0000 UTC m=+897.698001046" lastFinishedPulling="2026-01-30 00:25:19.403124262 +0000 UTC m=+909.274622314" observedRunningTime="2026-01-30 00:25:20.406309168 +0000 UTC m=+910.277807230" watchObservedRunningTime="2026-01-30 00:25:20.412038729 +0000 UTC m=+910.283536781" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.439529 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" podStartSLOduration=1.840816984 podStartE2EDuration="13.439505223s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.783837457 +0000 UTC m=+897.655335509" lastFinishedPulling="2026-01-30 00:25:19.382525686 +0000 UTC m=+909.254023748" observedRunningTime="2026-01-30 00:25:20.433734801 +0000 UTC m=+910.305232873" watchObservedRunningTime="2026-01-30 00:25:20.439505223 +0000 UTC m=+910.311003275" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.455154 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" podStartSLOduration=2.208552328 podStartE2EDuration="13.455131646s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:08.138925341 +0000 UTC m=+898.010423393" lastFinishedPulling="2026-01-30 00:25:19.385504659 +0000 UTC m=+909.257002711" observedRunningTime="2026-01-30 00:25:20.451715082 +0000 UTC m=+910.323213144" watchObservedRunningTime="2026-01-30 00:25:20.455131646 +0000 UTC m=+910.326629698" Jan 30 00:25:20 crc kubenswrapper[5103]: E0130 00:25:20.879136 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:24 crc kubenswrapper[5103]: I0130 00:25:24.367231 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" event={"ID":"67b9f41f-8bca-414a-aabd-5398b6f1ffe6","Type":"ContainerStarted","Data":"41f0a232689046faa91204aa3f8bbf1dd9cc89b25d442db880b06d793e18dbf8"} Jan 30 00:25:24 crc kubenswrapper[5103]: I0130 00:25:24.389745 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" podStartSLOduration=6.150583442 podStartE2EDuration="9.389729174s" podCreationTimestamp="2026-01-30 00:25:15 +0000 UTC" firstStartedPulling="2026-01-30 00:25:19.655572686 +0000 UTC m=+909.527070738" lastFinishedPulling="2026-01-30 00:25:22.894718428 +0000 UTC m=+912.766216470" observedRunningTime="2026-01-30 00:25:24.386304309 +0000 UTC m=+914.257802381" watchObservedRunningTime="2026-01-30 00:25:24.389729174 +0000 UTC m=+914.261227226" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.139187 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.146421 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.150099 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.150282 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.159888 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.276496 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.276548 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.381287 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.381355 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.413610 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.414677 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.460673 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.733267 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:27 crc kubenswrapper[5103]: I0130 00:25:27.408990 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" event={"ID":"5a9a4930-567c-4924-a3e4-a28fd367a358","Type":"ContainerStarted","Data":"58edb06db0240c37bbccbb7d4f765f69e9919b1896479a20b7f81851f3cad749"} Jan 30 00:25:31 crc kubenswrapper[5103]: I0130 00:25:31.351996 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.106321 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.106818 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.108013 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.928500 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.935348 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.940865 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.944828 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-tjl4n\"" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.080915 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.080967 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.182516 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.182600 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.203048 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.203076 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.260660 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.508663 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:34 crc kubenswrapper[5103]: I0130 00:25:34.457526 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" event={"ID":"c385ca3a-0d6e-45bd-9ac2-d2e884254487","Type":"ContainerStarted","Data":"ce10f2ccbd30e876c749dbab6deef12dccfc4ad494b9d944318a21860b4c555c"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.476613 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" event={"ID":"c385ca3a-0d6e-45bd-9ac2-d2e884254487","Type":"ContainerStarted","Data":"cae363e194e5ea8c9e412127092c8dc1d044f04f396d72d4c4c556a4cdb1a961"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.477766 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" event={"ID":"5a9a4930-567c-4924-a3e4-a28fd367a358","Type":"ContainerStarted","Data":"e8b9ff3b04833909d5a2d1b5ed1e0bc713b835f87fda3eefd60721dca9dda58a"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.478072 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.490709 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" podStartSLOduration=1.9689724690000001 podStartE2EDuration="5.490695265s" podCreationTimestamp="2026-01-30 00:25:32 +0000 UTC" firstStartedPulling="2026-01-30 00:25:33.513656476 +0000 UTC m=+923.385154528" lastFinishedPulling="2026-01-30 00:25:37.035379272 +0000 UTC m=+926.906877324" observedRunningTime="2026-01-30 00:25:37.489104966 +0000 UTC m=+927.360603028" watchObservedRunningTime="2026-01-30 00:25:37.490695265 +0000 UTC m=+927.362193317" Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.512757 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" podStartSLOduration=1.238430475 podStartE2EDuration="11.512738766s" podCreationTimestamp="2026-01-30 00:25:26 +0000 UTC" firstStartedPulling="2026-01-30 00:25:26.7390235 +0000 UTC m=+916.610521552" lastFinishedPulling="2026-01-30 00:25:37.013331791 +0000 UTC m=+926.884829843" observedRunningTime="2026-01-30 00:25:37.511429044 +0000 UTC m=+927.382927106" watchObservedRunningTime="2026-01-30 00:25:37.512738766 +0000 UTC m=+927.384236828" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.466226 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.470778 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.473333 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-2cvbr\"" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.474527 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.583852 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.583975 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.685158 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.685223 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.706192 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.716002 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.828893 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.311189 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.527383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-nxjsj" event={"ID":"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f","Type":"ContainerStarted","Data":"2390831b65b12887df4245ee0387db9210558e26783f75b78fb7d1dd9c53239c"} Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.527651 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-nxjsj" event={"ID":"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f","Type":"ContainerStarted","Data":"eaf6630deb209a6fee9b99ff99bed41bc39273947b8aedc616f49ffe0ac86ef7"} Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.555803 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-nxjsj" podStartSLOduration=1.555786986 podStartE2EDuration="1.555786986s" podCreationTimestamp="2026-01-30 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:25:40.555165491 +0000 UTC m=+930.426663553" watchObservedRunningTime="2026-01-30 00:25:40.555786986 +0000 UTC m=+930.427285038" Jan 30 00:25:43 crc kubenswrapper[5103]: I0130 00:25:43.489890 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:44 crc kubenswrapper[5103]: E0130 00:25:44.870977 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:57 crc kubenswrapper[5103]: E0130 00:25:57.870642 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image 
source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:58 crc kubenswrapper[5103]: I0130 00:25:58.493111 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:58 crc kubenswrapper[5103]: I0130 00:25:58.493201 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.140568 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.147673 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151125 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151261 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151261 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.152584 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.279444 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.380524 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.408577 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.473789 5103 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.935435 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.945849 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:26:01 crc kubenswrapper[5103]: I0130 00:26:01.676749 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerStarted","Data":"c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d"} Jan 30 00:26:02 crc kubenswrapper[5103]: I0130 00:26:02.689526 5103 generic.go:358] "Generic (PLEG): container finished" podID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerID="013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2" exitCode=0 Jan 30 00:26:02 crc kubenswrapper[5103]: I0130 00:26:02.689680 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerDied","Data":"013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2"} Jan 30 00:26:03 crc kubenswrapper[5103]: I0130 00:26:03.966201 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.031422 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"d4b28226-5bd7-4b43-aec3-648633cbde03\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.036522 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j" (OuterVolumeSpecName: "kube-api-access-gcg4j") pod "d4b28226-5bd7-4b43-aec3-648633cbde03" (UID: "d4b28226-5bd7-4b43-aec3-648633cbde03"). InnerVolumeSpecName "kube-api-access-gcg4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.132709 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705272 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705294 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerDied","Data":"c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d"} Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705788 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d" Jan 30 00:26:05 crc kubenswrapper[5103]: I0130 00:26:05.029202 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:26:05 crc kubenswrapper[5103]: I0130 00:26:05.032939 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:26:06 crc kubenswrapper[5103]: I0130 00:26:06.880213 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" path="/var/lib/kubelet/pods/d6b2c0b7-a88b-4f50-945a-938210a1c4cc/volumes" Jan 30 00:26:10 crc kubenswrapper[5103]: E0130 00:26:10.872520 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:19 crc kubenswrapper[5103]: I0130 00:26:19.344663 5103 scope.go:117] "RemoveContainer" containerID="2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.808510 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.809455 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init 
container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.810648 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:28 crc kubenswrapper[5103]: I0130 00:26:28.493315 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:28 crc kubenswrapper[5103]: I0130 00:26:28.493967 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:38 crc kubenswrapper[5103]: E0130 00:26:38.870517 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:51 crc kubenswrapper[5103]: E0130 00:26:51.872190 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.493571 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.494379 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.494479 
5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.495999 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.496189 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" gracePeriod=600 Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069191 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" exitCode=0 Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069238 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069937 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069965 5103 scope.go:117] "RemoveContainer" containerID="3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" Jan 30 00:27:06 crc kubenswrapper[5103]: E0130 00:27:06.876029 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:20 crc kubenswrapper[5103]: E0130 00:27:20.882654 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: 
unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:34 crc kubenswrapper[5103]: E0130 00:27:34.871319 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.116445 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.117491 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.119603 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.146407 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148242 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148274 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148475 5103 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.156183 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.156426 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163705 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163953 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163987 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.268110 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.369715 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.396956 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.480475 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.751361 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:01 crc kubenswrapper[5103]: I0130 00:28:01.567353 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerStarted","Data":"5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088"} Jan 30 00:28:01 crc kubenswrapper[5103]: E0130 00:28:01.870368 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:02 crc kubenswrapper[5103]: I0130 00:28:02.587639 5103 generic.go:358] "Generic (PLEG): container finished" podID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerID="c2af859b78905cccbd737ba86e5a69188e15dc8b11ba5934dd036e2c842496f3" exitCode=0 Jan 30 00:28:02 crc kubenswrapper[5103]: I0130 00:28:02.587701 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerDied","Data":"c2af859b78905cccbd737ba86e5a69188e15dc8b11ba5934dd036e2c842496f3"} Jan 30 00:28:03 crc kubenswrapper[5103]: I0130 00:28:03.981025 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.121306 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"7e1187f4-b882-49e8-b76a-6a33d208d851\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.133316 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb" (OuterVolumeSpecName: "kube-api-access-wxtmb") pod "7e1187f4-b882-49e8-b76a-6a33d208d851" (UID: "7e1187f4-b882-49e8-b76a-6a33d208d851"). InnerVolumeSpecName "kube-api-access-wxtmb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.223681 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.603787 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerDied","Data":"5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088"} Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.604065 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.603936 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:05 crc kubenswrapper[5103]: I0130 00:28:05.061354 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:28:05 crc kubenswrapper[5103]: I0130 00:28:05.070496 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:28:06 crc kubenswrapper[5103]: I0130 00:28:06.879160 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" path="/var/lib/kubelet/pods/b6eabbd6-7a3e-476d-9412-948faeb44ce2/volumes" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.230196 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231826 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231841 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231996 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.243996 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.246441 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248375 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-twd9h\"/\"openshift-service-ca.crt\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248491 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-twd9h\"/\"default-dockercfg-z9hl2\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248910 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-twd9h\"/\"kube-root-ca.crt\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.343761 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.343923 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.444992 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.445128 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.445551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.469688 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.573493 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.852924 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: W0130 00:28:12.857724 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d4f962b_cbec_41d6_9514_8d19a9455156.slice/crio-93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7 WatchSource:0}: Error finding container 93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7: Status 404 returned error can't find the container with id 93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7 Jan 30 00:28:13 crc kubenswrapper[5103]: I0130 00:28:13.682270 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7"} Jan 30 00:28:15 crc kubenswrapper[5103]: E0130 00:28:15.870662 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.717466 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da"} Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.717832 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.738141 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-twd9h/must-gather-9ltrv" podStartSLOduration=1.596602337 podStartE2EDuration="6.738115719s" podCreationTimestamp="2026-01-30 00:28:12 +0000 UTC" firstStartedPulling="2026-01-30 00:28:12.859861929 +0000 UTC m=+1082.731359981" lastFinishedPulling="2026-01-30 00:28:18.001375311 +0000 UTC m=+1087.872873363" observedRunningTime="2026-01-30 00:28:18.735250839 +0000 UTC m=+1088.606748931" watchObservedRunningTime="2026-01-30 00:28:18.738115719 +0000 UTC m=+1088.609613811" Jan 30 00:28:19 crc kubenswrapper[5103]: 
I0130 00:28:19.464661 5103 scope.go:117] "RemoveContainer" containerID="85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e" Jan 30 00:28:29 crc kubenswrapper[5103]: E0130 00:28:29.874245 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:41 crc kubenswrapper[5103]: E0130 00:28:41.872838 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:54 crc kubenswrapper[5103]: E0130 00:28:54.874692 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:58 crc kubenswrapper[5103]: I0130 
00:28:58.493301 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:28:58 crc kubenswrapper[5103]: I0130 00:28:58.493702 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.282078 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-94r9t_35998b47-ed37-4a50-9553-18147918d9cb/control-plane-machine-set-operator/0.log" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.440031 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5tp7b_f3b3db2b-ab99-483b-a13c-4947269bc330/kube-rbac-proxy/0.log" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.492290 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5tp7b_f3b3db2b-ab99-483b-a13c-4947269bc330/machine-api-operator/0.log" Jan 30 00:29:06 crc kubenswrapper[5103]: E0130 00:29:06.870955 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.156524 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-nxjsj_a4645f5f-5b75-41a8-8a06-0a9b5be3e07f/cert-manager-controller/0.log" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.241042 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-jw2mg_c385ca3a-0d6e-45bd-9ac2-d2e884254487/cert-manager-cainjector/0.log" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.310772 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-2l6mr_5a9a4930-567c-4924-a3e4-a28fd367a358/cert-manager-webhook/0.log" Jan 30 00:29:19 crc kubenswrapper[5103]: E0130 00:29:19.870632 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:27 crc kubenswrapper[5103]: I0130 00:29:27.823283 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-mmf75_957968da-8046-4a89-91ac-ecb8c0e83e85/prometheus-operator/0.log" Jan 30 00:29:27 crc kubenswrapper[5103]: I0130 00:29:27.904316 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp_a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.012177 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf_888a411a-eaa9-4b4f-877b-0653ce686e73/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.080527 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jcs7p_e7d2bde2-5437-4672-b6b6-f2babe73dff0/operator/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.203979 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5r6dq_8c7fdb9f-be0e-428a-88e1-283c31de8ad1/perses-operator/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.493601 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.494162 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:30 crc kubenswrapper[5103]: E0130 00:29:30.886186 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:41 crc kubenswrapper[5103]: E0130 00:29:41.870650 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.141343 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.299490 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.511040 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.663709 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.865199 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.882126 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.923842 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 
00:29:43.048330 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/extract/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.055313 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.078968 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.217641 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.370550 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.372706 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.412219 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.555285 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.556860 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/extract/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.560774 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.709689 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.863680 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.898481 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.910890 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:44 crc 
kubenswrapper[5103]: I0130 00:29:44.086240 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.123856 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.227901 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/registry-server/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.260648 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.440257 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.449800 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.457874 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.640497 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.646153 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.773666 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-m7wbv_0180b3c6-131f-4a8c-ac9a-1b410e056ae2/marketplace-operator/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.838294 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/registry-server/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.857514 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.030620 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.035625 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.049358 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc 
kubenswrapper[5103]: I0130 00:29:45.176133 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.197917 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.327804 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/registry-server/0.log" Jan 30 00:29:54 crc kubenswrapper[5103]: E0130 00:29:54.871317 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.809612 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp_a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.826548 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-mmf75_957968da-8046-4a89-91ac-ecb8c0e83e85/prometheus-operator/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.839959 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf_888a411a-eaa9-4b4f-877b-0653ce686e73/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.910902 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jcs7p_e7d2bde2-5437-4672-b6b6-f2babe73dff0/operator/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.962415 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5r6dq_8c7fdb9f-be0e-428a-88e1-283c31de8ad1/perses-operator/0.log" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492820 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492910 5103 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492965 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.493642 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.493719 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac" gracePeriod=600 Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366258 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366233 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac" exitCode=0 Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366635 5103 scope.go:117] "RemoveContainer" containerID="90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366708 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f"} Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.135985 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.141264 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.141659 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.143591 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.145021 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.146853 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157338 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157377 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157523 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.179698 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.181543 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249459 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249788 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249809 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249829 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: 
\"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351241 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351288 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351307 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351457 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.352356 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.358475 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.369515 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.383320 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.456506 5103 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.482130 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.668207 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.709077 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: W0130 00:30:00.714143 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9a13ac8_6221_4293_b335_523278207648.slice/crio-99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41 WatchSource:0}: Error finding container 99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41: Status 404 returned error can't find the container with id 99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41 Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.384833 5103 generic.go:358] "Generic (PLEG): container finished" podID="d9a13ac8-6221-4293-b335-523278207648" containerID="f158a1749d81beabc76f102764ffb4986db8d74961f96e0199513925785628df" exitCode=0 Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.384897 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerDied","Data":"f158a1749d81beabc76f102764ffb4986db8d74961f96e0199513925785628df"} Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.385225 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerStarted","Data":"99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41"} Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.388889 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerStarted","Data":"65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5"} Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.683636 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798573 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798642 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798769 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.801129 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.807616 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.807648 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv" (OuterVolumeSpecName: "kube-api-access-rjcsv") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "kube-api-access-rjcsv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900371 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900418 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900436 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406073 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerDied","Data":"99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41"} Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406347 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406202 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.410493 5103 generic.go:358] "Generic (PLEG): container finished" podID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerID="1fd4fa358eb20ef4f4388ad86d8aea58f2fc537950e57966f555eb8df763b409" exitCode=0 Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.410550 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerDied","Data":"1fd4fa358eb20ef4f4388ad86d8aea58f2fc537950e57966f555eb8df763b409"} Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.709080 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.830484 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.839343 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j" (OuterVolumeSpecName: "kube-api-access-6sg8j") pod "b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" (UID: "b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b"). InnerVolumeSpecName "kube-api-access-6sg8j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.931793 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428774 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerDied","Data":"65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5"} Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428845 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428794 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.790738 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.794275 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:30:06 crc kubenswrapper[5103]: I0130 00:30:06.879271 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad58695-120d-466b-bec0-3198637da77d" path="/var/lib/kubelet/pods/5ad58695-120d-466b-bec0-3198637da77d/volumes" Jan 30 00:30:10 crc kubenswrapper[5103]: E0130 00:30:09.870894 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.545769 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.545812 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.548151 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.548458 5103 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:30:19 crc kubenswrapper[5103]: I0130 00:30:19.594925 5103 scope.go:117] "RemoveContainer" containerID="cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e" Jan 30 00:30:20 crc kubenswrapper[5103]: E0130 00:30:20.883908 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.621709 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.622560 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.624523 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.685478 5103 generic.go:358] "Generic (PLEG): container finished" podID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" exitCode=0 Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.685588 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerDied","Data":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.686431 5103 
scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:37 crc kubenswrapper[5103]: I0130 00:30:37.144749 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/gather/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.256646 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.257679 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-twd9h/must-gather-9ltrv" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" containerID="cri-o://47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" gracePeriod=2 Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.262031 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.264432 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.584033 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/copy/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.585002 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.586676 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742217 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/copy/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742690 5103 generic.go:358] "Generic (PLEG): container finished" podID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" exitCode=143 Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742761 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742912 5103 scope.go:117] "RemoveContainer" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.744469 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.750001 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"5d4f962b-cbec-41d6-9514-8d19a9455156\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.750096 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"5d4f962b-cbec-41d6-9514-8d19a9455156\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.760252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff" (OuterVolumeSpecName: "kube-api-access-59tff") pod "5d4f962b-cbec-41d6-9514-8d19a9455156" (UID: "5d4f962b-cbec-41d6-9514-8d19a9455156"). InnerVolumeSpecName "kube-api-access-59tff". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.763156 5103 scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.792026 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5d4f962b-cbec-41d6-9514-8d19a9455156" (UID: "5d4f962b-cbec-41d6-9514-8d19a9455156"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.814886 5103 scope.go:117] "RemoveContainer" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: E0130 00:30:43.815275 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": container with ID starting with 47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da not found: ID does not exist" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815517 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da"} err="failed to get container status \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": rpc error: code = NotFound desc = could not find container \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": container with ID starting with 47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da not found: ID does not exist" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815597 5103 scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: E0130 00:30:43.815949 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": container with ID starting with 93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90 not found: ID does not exist" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815977 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} err="failed to get container status \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": rpc error: code = NotFound desc = could not find container \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": container with ID starting with 93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90 not found: ID does not exist" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.851374 5103 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.851409 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.061971 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:44 crc 
kubenswrapper[5103]: E0130 00:30:44.873190 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.882000 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" path="/var/lib/kubelet/pods/5d4f962b-cbec-41d6-9514-8d19a9455156/volumes" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.899195 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:57 crc kubenswrapper[5103]: E0130 00:30:57.870525 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:09 crc kubenswrapper[5103]: I0130 00:31:09.974915 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:31:09 crc kubenswrapper[5103]: E0130 00:31:09.976011 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:23 crc kubenswrapper[5103]: E0130 00:31:23.871021 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:35 crc kubenswrapper[5103]: E0130 00:31:35.872231 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:48 crc kubenswrapper[5103]: E0130 00:31:48.871007 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:58 crc kubenswrapper[5103]: I0130 00:31:58.493235 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:31:58 crc kubenswrapper[5103]: I0130 00:31:58.493619 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.148228 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149427 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149449 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149482 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149491 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149506 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149515 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149538 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149546 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149655 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149670 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149683 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149695 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.154643 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.157255 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.158394 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.158552 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.170708 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.271245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.373902 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.395668 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.486660 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.766908 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: E0130 00:32:00.880869 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:01 crc kubenswrapper[5103]: I0130 00:32:01.428938 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerStarted","Data":"2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42"} Jan 30 00:32:02 crc kubenswrapper[5103]: I0130 00:32:02.437708 5103 generic.go:358] "Generic (PLEG): container finished" podID="ceb0ecdd-c611-4860-853f-570beffcf4e5" containerID="7ee18343af479626b3c9134413db1d2ecff31943eaa3b062712f47bcf2e15ba3" exitCode=0 Jan 30 00:32:02 crc kubenswrapper[5103]: I0130 00:32:02.437848 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerDied","Data":"7ee18343af479626b3c9134413db1d2ecff31943eaa3b062712f47bcf2e15ba3"} Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.746376 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.823698 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"ceb0ecdd-c611-4860-853f-570beffcf4e5\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.831032 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc" (OuterVolumeSpecName: "kube-api-access-d8jtc") pod "ceb0ecdd-c611-4860-853f-570beffcf4e5" (UID: "ceb0ecdd-c611-4860-853f-570beffcf4e5"). InnerVolumeSpecName "kube-api-access-d8jtc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.926258 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") on node \"crc\" DevicePath \"\"" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456179 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456230 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerDied","Data":"2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42"} Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456746 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.829828 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.839080 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.877647 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" path="/var/lib/kubelet/pods/d4b28226-5bd7-4b43-aec3-648633cbde03/volumes" Jan 30 00:32:11 crc kubenswrapper[5103]: E0130 00:32:11.871148 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:19 crc kubenswrapper[5103]: I0130 00:32:19.757392 5103 scope.go:117] "RemoveContainer" containerID="013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2" Jan 30 00:32:26 crc kubenswrapper[5103]: E0130 00:32:26.871882 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:28 crc kubenswrapper[5103]: I0130 00:32:28.492960 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:32:28 crc kubenswrapper[5103]: I0130 00:32:28.493043 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:32:37 crc kubenswrapper[5103]: E0130 00:32:37.870200 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:49 crc kubenswrapper[5103]: E0130 00:32:49.873431 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 
00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.492916 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.493674 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.493736 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.494707 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.494843 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f" gracePeriod=600 Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.650964 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f" exitCode=0 Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.651264 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f"} Jan 30 00:32:58 crc kubenswrapper[5103]: I0130 00:32:58.651306 5103 scope.go:117] "RemoveContainer" containerID="5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac" Jan 30 00:32:59 crc kubenswrapper[5103]: I0130 00:32:59.662855 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"66da709ebcd84f186ba375568f8966eb412e41c661f82ab3da89755f8fee12af"} Jan 30 00:33:00 crc kubenswrapper[5103]: E0130 00:33:00.877888 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup 
registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.208072 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.209527 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ceb0ecdd-c611-4860-853f-570beffcf4e5" containerName="oc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.209544 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb0ecdd-c611-4860-853f-570beffcf4e5" containerName="oc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.209686 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="ceb0ecdd-c611-4860-853f-570beffcf4e5" containerName="oc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.214244 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.218380 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.343037 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bv8f\" (UniqueName: \"kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.343148 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.343178 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.444923 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8bv8f\" (UniqueName: \"kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.444975 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " 
pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.445003 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.445464 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.445526 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.481559 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bv8f\" (UniqueName: \"kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f\") pod \"certified-operators-mdlnc\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.563330 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:08 crc kubenswrapper[5103]: I0130 00:33:08.769672 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:09 crc kubenswrapper[5103]: I0130 00:33:09.749485 5103 generic.go:358] "Generic (PLEG): container finished" podID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerID="f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f" exitCode=0 Jan 30 00:33:09 crc kubenswrapper[5103]: I0130 00:33:09.749664 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerDied","Data":"f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f"} Jan 30 00:33:09 crc kubenswrapper[5103]: I0130 00:33:09.749740 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerStarted","Data":"a7a879cad88aba6fb31d0ec333d1515d43b4678a11fe602b33cb53c28b20580f"} Jan 30 00:33:10 crc kubenswrapper[5103]: I0130 00:33:10.762308 5103 generic.go:358] "Generic (PLEG): container finished" podID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerID="28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7" exitCode=0 Jan 30 00:33:10 crc kubenswrapper[5103]: I0130 00:33:10.762399 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerDied","Data":"28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7"} Jan 30 00:33:11 crc kubenswrapper[5103]: I0130 00:33:11.772393 5103 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerStarted","Data":"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b"} Jan 30 00:33:11 crc kubenswrapper[5103]: I0130 00:33:11.790583 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mdlnc" podStartSLOduration=3.170547621 podStartE2EDuration="3.790564672s" podCreationTimestamp="2026-01-30 00:33:08 +0000 UTC" firstStartedPulling="2026-01-30 00:33:09.751200015 +0000 UTC m=+1379.622698117" lastFinishedPulling="2026-01-30 00:33:10.371217106 +0000 UTC m=+1380.242715168" observedRunningTime="2026-01-30 00:33:11.787526617 +0000 UTC m=+1381.659024709" watchObservedRunningTime="2026-01-30 00:33:11.790564672 +0000 UTC m=+1381.662062734" Jan 30 00:33:15 crc kubenswrapper[5103]: E0130 00:33:15.871106 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:33:18 crc kubenswrapper[5103]: I0130 00:33:18.564012 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:18 crc kubenswrapper[5103]: I0130 00:33:18.564596 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:18 crc kubenswrapper[5103]: I0130 00:33:18.633044 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:18 crc kubenswrapper[5103]: I0130 00:33:18.890521 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:18 crc kubenswrapper[5103]: I0130 00:33:18.953270 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:20 crc kubenswrapper[5103]: I0130 00:33:20.844305 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mdlnc" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="registry-server" containerID="cri-o://94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b" gracePeriod=2 Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.217237 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.349375 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bv8f\" (UniqueName: \"kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f\") pod \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.349749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities\") pod \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.349945 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content\") pod \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\" (UID: \"0a8c29b6-c736-4dec-9de3-8784ad3b99b2\") " Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.351935 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities" (OuterVolumeSpecName: "utilities") pod "0a8c29b6-c736-4dec-9de3-8784ad3b99b2" (UID: "0a8c29b6-c736-4dec-9de3-8784ad3b99b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.359073 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f" (OuterVolumeSpecName: "kube-api-access-8bv8f") pod "0a8c29b6-c736-4dec-9de3-8784ad3b99b2" (UID: "0a8c29b6-c736-4dec-9de3-8784ad3b99b2"). InnerVolumeSpecName "kube-api-access-8bv8f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.407718 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a8c29b6-c736-4dec-9de3-8784ad3b99b2" (UID: "0a8c29b6-c736-4dec-9de3-8784ad3b99b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.451723 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.451820 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8bv8f\" (UniqueName: \"kubernetes.io/projected/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-kube-api-access-8bv8f\") on node \"crc\" DevicePath \"\"" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.451836 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8c29b6-c736-4dec-9de3-8784ad3b99b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.853958 5103 generic.go:358] "Generic (PLEG): container finished" podID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerID="94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b" exitCode=0 Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.854187 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerDied","Data":"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b"} Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.854227 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdlnc" event={"ID":"0a8c29b6-c736-4dec-9de3-8784ad3b99b2","Type":"ContainerDied","Data":"a7a879cad88aba6fb31d0ec333d1515d43b4678a11fe602b33cb53c28b20580f"} Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.854257 5103 scope.go:117] "RemoveContainer" containerID="94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.854488 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mdlnc" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.905771 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.911456 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mdlnc"] Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.925383 5103 scope.go:117] "RemoveContainer" containerID="28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.962803 5103 scope.go:117] "RemoveContainer" containerID="f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.983283 5103 scope.go:117] "RemoveContainer" containerID="94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b" Jan 30 00:33:21 crc kubenswrapper[5103]: E0130 00:33:21.983868 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b\": container with ID starting with 94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b not found: ID does not exist" containerID="94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.983925 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b"} err="failed to get container status \"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b\": rpc error: code = NotFound desc = could not find container \"94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b\": container with ID starting with 94ca51e8b3d0878a20bb5a37f436caf9e5458025502708d683e6fe379c73325b not found: ID does not exist" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.983948 5103 scope.go:117] "RemoveContainer" containerID="28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7" Jan 30 00:33:21 crc kubenswrapper[5103]: E0130 00:33:21.984270 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7\": container with ID starting with 28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7 not found: ID does not exist" containerID="28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.984412 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7"} err="failed to get container status \"28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7\": rpc error: code = NotFound desc = could not find container \"28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7\": container with ID starting with 28e8668a60c482b80d97f908c8a1b980bf861de9d395b229c75a6631445650b7 not found: ID does not exist" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.984532 5103 scope.go:117] "RemoveContainer" containerID="f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f" Jan 30 00:33:21 crc kubenswrapper[5103]: E0130 00:33:21.984974 5103 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f\": container with ID starting with f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f not found: ID does not exist" containerID="f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f" Jan 30 00:33:21 crc kubenswrapper[5103]: I0130 00:33:21.984999 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f"} err="failed to get container status \"f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f\": rpc error: code = NotFound desc = could not find container \"f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f\": container with ID starting with f05921948d7798459f8d8843e6da1a1e40c643e31b9572aff977a3fb7d221b5f not found: ID does not exist" Jan 30 00:33:22 crc kubenswrapper[5103]: I0130 00:33:22.878205 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" path="/var/lib/kubelet/pods/0a8c29b6-c736-4dec-9de3-8784ad3b99b2/volumes" Jan 30 00:33:27 crc kubenswrapper[5103]: E0130 00:33:27.870344 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:33:41 crc kubenswrapper[5103]: E0130 00:33:41.870172 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:33:52 crc kubenswrapper[5103]: E0130 00:33:52.871652 5103 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.149192 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495554-rfmhq"] Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.152141 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="registry-server" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.152361 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="registry-server" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.152648 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="extract-utilities" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.152846 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="extract-utilities" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.153143 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="extract-content" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.153354 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="extract-content" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.153672 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a8c29b6-c736-4dec-9de3-8784ad3b99b2" containerName="registry-server" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.160688 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.161432 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495554-rfmhq"] Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.164500 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.164754 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.164859 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.310977 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsvd\" (UniqueName: \"kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd\") pod \"auto-csr-approver-29495554-rfmhq\" (UID: \"daa0f1c3-2fa4-41f0-bafe-afc391819e4d\") " pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.412452 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsvd\" (UniqueName: \"kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd\") pod \"auto-csr-approver-29495554-rfmhq\" (UID: \"daa0f1c3-2fa4-41f0-bafe-afc391819e4d\") " pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.449553 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsvd\" (UniqueName: \"kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd\") pod \"auto-csr-approver-29495554-rfmhq\" (UID: \"daa0f1c3-2fa4-41f0-bafe-afc391819e4d\") " pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.498836 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:00 crc kubenswrapper[5103]: I0130 00:34:00.741247 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495554-rfmhq"] Jan 30 00:34:00 crc kubenswrapper[5103]: W0130 00:34:00.747536 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaa0f1c3_2fa4_41f0_bafe_afc391819e4d.slice/crio-f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c WatchSource:0}: Error finding container f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c: Status 404 returned error can't find the container with id f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c Jan 30 00:34:01 crc kubenswrapper[5103]: I0130 00:34:01.207325 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" event={"ID":"daa0f1c3-2fa4-41f0-bafe-afc391819e4d","Type":"ContainerStarted","Data":"f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c"} Jan 30 00:34:02 crc kubenswrapper[5103]: I0130 00:34:02.214544 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" event={"ID":"daa0f1c3-2fa4-41f0-bafe-afc391819e4d","Type":"ContainerStarted","Data":"e37889c5e602835b005c1e51c77dc12d77a0c1b0a5748497e560b80f1d976aae"} Jan 30 00:34:02 crc kubenswrapper[5103]: I0130 00:34:02.232414 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" podStartSLOduration=1.26992371 podStartE2EDuration="2.232399745s" podCreationTimestamp="2026-01-30 00:34:00 +0000 UTC" firstStartedPulling="2026-01-30 00:34:00.749492329 +0000 UTC m=+1430.620990391" lastFinishedPulling="2026-01-30 00:34:01.711968354 +0000 UTC m=+1431.583466426" observedRunningTime="2026-01-30 00:34:02.227321129 +0000 UTC m=+1432.098819191" watchObservedRunningTime="2026-01-30 00:34:02.232399745 +0000 UTC m=+1432.103897797" Jan 30 00:34:03 crc kubenswrapper[5103]: I0130 00:34:03.227280 5103 generic.go:358] "Generic (PLEG): container finished" podID="daa0f1c3-2fa4-41f0-bafe-afc391819e4d" containerID="e37889c5e602835b005c1e51c77dc12d77a0c1b0a5748497e560b80f1d976aae" exitCode=0 Jan 30 00:34:03 crc kubenswrapper[5103]: I0130 00:34:03.227442 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" event={"ID":"daa0f1c3-2fa4-41f0-bafe-afc391819e4d","Type":"ContainerDied","Data":"e37889c5e602835b005c1e51c77dc12d77a0c1b0a5748497e560b80f1d976aae"} Jan 30 00:34:04 crc kubenswrapper[5103]: I0130 00:34:04.536386 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:04 crc kubenswrapper[5103]: I0130 00:34:04.577067 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsvd\" (UniqueName: \"kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd\") pod \"daa0f1c3-2fa4-41f0-bafe-afc391819e4d\" (UID: \"daa0f1c3-2fa4-41f0-bafe-afc391819e4d\") " Jan 30 00:34:04 crc kubenswrapper[5103]: I0130 00:34:04.584904 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd" (OuterVolumeSpecName: "kube-api-access-tmsvd") pod "daa0f1c3-2fa4-41f0-bafe-afc391819e4d" (UID: "daa0f1c3-2fa4-41f0-bafe-afc391819e4d"). InnerVolumeSpecName "kube-api-access-tmsvd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:34:04 crc kubenswrapper[5103]: I0130 00:34:04.678796 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tmsvd\" (UniqueName: \"kubernetes.io/projected/daa0f1c3-2fa4-41f0-bafe-afc391819e4d-kube-api-access-tmsvd\") on node \"crc\" DevicePath \"\"" Jan 30 00:34:04 crc kubenswrapper[5103]: E0130 00:34:04.873191 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:34:05 crc kubenswrapper[5103]: I0130 00:34:05.244216 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" event={"ID":"daa0f1c3-2fa4-41f0-bafe-afc391819e4d","Type":"ContainerDied","Data":"f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c"} Jan 30 00:34:05 crc kubenswrapper[5103]: I0130 00:34:05.244279 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f984f7a6a03d4e9b38a72414a7ac40d6c88692fe777ec655656cba35ab7d337c" Jan 30 00:34:05 crc kubenswrapper[5103]: I0130 00:34:05.244279 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495554-rfmhq" Jan 30 00:34:05 crc kubenswrapper[5103]: I0130 00:34:05.304396 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:34:05 crc kubenswrapper[5103]: I0130 00:34:05.312300 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:34:06 crc kubenswrapper[5103]: I0130 00:34:06.880964 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" path="/var/lib/kubelet/pods/7e1187f4-b882-49e8-b76a-6a33d208d851/volumes"