[Binary archive content omitted: tar archive of var/home/core/zuul-output/ (owner core:core) containing logs/kubelet.log.gz, a gzip-compressed kubelet log. The compressed payload is not recoverable as text.]
dPsTIQsB]<xhe/<.0++9jpRW-9v H ՕԄR`Au4ʃE,C{vLՕRA]]R~ NLX zpU4YZo!(uuJkUNx=nYZ+ Ygd #齫:ctz{W'{ꙺ5ԕUssLŦ~,%'|rgB.=Up0/4 䝕3 :%;^w=4w!ے=]E\a!/jȰC qI#qͯ|g'=z糹 IJ o# KğOZ"HW{SǶ171C0>I1%^Oժ[+gFnCTX@p3!&db8.H^n5ƈN8SZrFEVwFsLfVϔ9ȋR)Tf_ZÿaP>ov,٨}ox?\m)XU#5e>ݛy0$-Z4lq;pG?w%(bA)13E̷(!KtdQ$ۣIES!J|ۑ.(=JlwH);o3Խ i}3T42C壊Yzjc{B(̐&Ƈ1fБOpiNjtN8goh EAGˀKg8n:`yx7&^*2b7)&r咸L9(:Yj?@=Bd3S'ͱ#\sH(ϠC.27dZ\Q婓[j4E(!g,rcm4 Fgh< L9g"NRN;m1xL`ڊ}<vwh) X`iL*tF:G1rq"sI9&uQE- B`J"!cI)%C&LQP1Q)X>?$p\XsI,ӆ#V9JɳiA||<|J Z,*7G#RHj9'5y)W WSp)6a==UBjP@xD\'?z3vW#Koɠ}<ZnwkZ=̊×@zy=WHwfu|B큼{Ffy)(-_.h "x|1b-èリ@_a鏕Z峔ٙ!]\WT_6'(,ׇOZ>Vur|~|5Fg h9EeE| /SÂwHwtѠ ۾QnȷU,K>#hו=uיzJw_C9^ x1Eh9Оi-(Z^|Է;HAnm:47q;(9 " CYU"XU@KUP*l&hD <Ţ:4+.g-cSH|z7]|r˲?'TJu!jCYeoGyAHLLyp9yh%P*6 4}D>n!brwu,)A]]\Xx )@E=@[Sv겵RBh1Qu>+{ͧo>}k>|b{uY,%:a֩Dg&$sΠLܒ-nK BV2H۰Qn;okaV/ S>>|,ro.ܛngcVrX˱tks|˝/\0_>zh?8/TEGa6]\}3٧g7/>-@,HsR#ZJD2jк.ڰCQ_dVE`فKG TS NsW|o?Jſ> $*~ ;|SX6XT`C aV7 A:4X/ATU7cf 8 Z7HFB(0Y܂|L/s>u{ǵ@| /Z OgI ͔-ɔPتT"ň$a\Q9.3=]q~^6 ܚ/+|G U ȣ;/6}4/ߒ(zߐwMP#ئHbYj`~fe(׾*9bS!ilyed9\҉ 7|Q) ΅ \!c9+i/H@4?/IF8sNcs!V/TԎn^n$gTq]*rcg~Y#@GUA#: ,c!EEiJg{/.Z{vĻS~7>>Ŧ!jSo<n*r`,|C8YD'uI0FHa:T$S6t')T!BC,H!Ow;&  2lL•e c)ND!%;Ŵ޿Ny⋬}uRb7GG(#R0$db 5y%WVםJ~5B.Դ ]5OtVR?Ggoftu(!]|CHA{ZnwkZ= '-~]|Gwfus;>YW,ނIq(_.hܘ˜+-c.h喇$Q%[ oVitT+*Oy5eY6OZ>V,OyπRyZn1r}Q٨启v=S&ZYfRvoqcqw~6+%f?_/_L,G -@ ^W"<(dO bTcLj3pJE{~#O}ݣ8>6Sr5Q& b<^\!vv CAv!bglz+xQ>+%MzB =cjYTu=c5<7?%r, RildMc8a)AtuAb2\;]oz(c}ެztK/h3cD4EXѪ@DsG]"C#-pF V2,%xgrRc趾2 \j^ؕZh‡+O`\c>+6]y WATi\ TRH9Zh@pµӀ&kAbNH%#@P%% ^~6̮^ԧW < H@O8,zI*p5seB-. ".GQKe䟛Pn޼?{WHJq;̡P5}k`S*mkKIN;!YrH(qpH t!Sv|Vߛ’uiJ6[TG wT?uu]4-/ֈ39?cd]QG_ `_^"㱸TqI)j >$xm#4j37%%pj Fh|o R pLI6L֫s+>> S]r^+.;ĬVWvJ/fcӢI25dp<YF (ų5/(`sO,yY}kkb68 mj?‘P*8!cMNb!k MG wmre|ЪrB<2{{bޕX=L1s@8jR,GGp׃ {azQ\lj =NEe"3ί^OMbi 42,[+;R9ħʛKU 0N*V*Rc4 $DyPm2%aQh yCaq[*B# 2B\GX1np :5&Yӑ*Dt ˜Tࠡս4Ŕ4~"W˯wl5v\` !cUW8dsG:DxjV<~X aXel cϑ,y&N%],;JG(?РTu*[? )-K-ޗn'7$tur #WF!J!exV1,AR ľY?~H23<.b1R"t4 3  > SSpkE q e*OId,Ş[^cLtCNLAWcb|$8\L<D`FDwr89N q &c! 
!)ZJE_y^xU寯ON<2fb^k(³긋C&₣hAVqنGMvqB5D*5.x^9PCq #(x́:N^?g:reIיxu$Qiٕ ck~۪C#z擪 F08\s)@˟FrOBcNѡ}PKi/օ"5tDy<ڶY2UK yh_k4F2n~Wh [zq{]`TGZlpB0i?rzWO\5).4SBу'Qk|H,Ng_x,Ke}//s(/$LR"a(9Ei)$Pۉ1M#-vlqTHyL 9|Tb\*'}-V-s,m['/IBKo(IUfQ[[Y9bF肶6>zJw{Ny/[9I(3Z >5lkʊ* j2?X1~qa+_)㡴uME fӈp9DيhHmAB1N=s9zFIKd&\RbˉFib{ q[kjhltJ xb1ήP^Y l$uv v9;Q@w9=Z9-{ WXlX׾%&$LwDŽʳ=Zٲ-bTl=ߑѫ߰>#:f1x![f9Z}q{>fnI {RLV9`"=*h1-6I1{*Z{JFeaapm1_..({eAFHD6ϯU?{>tsx#QFbV~jg;^~f򭶢-_]+.hjxqEQ{ufBI,q+| # /gQKPok](w˚Jf`1-$T5'sOԡgE`';!xvYqϐS_\jbX͒A)&}&upAMe[e( "zSp,л#BvcyMԑ $NcǜeRۏg7ӹ~#Dw!zE}'M>s]CeڵBp&$\+ƉW)~'Kʳ.^bЃ91c&/y3cDyR*`d4-$ǰh64U%r-[㝻ZFgAQzSc94ͭ&][m#Cme\ϯյޢaf-C> (@eyк C ( 8"Y;(в4$7S0.ɱѿty.g'1 uIl_b}{Ϳ);bfw8^;,b{I\|yJD '4<}/9(lv:-POf;NW RQ%XY· Xzf")o'R4 /fIͮ Ʈ:n"q iFm@_Δ`ڗfIu2>j*o +&8;:dgiX]IJJ|<6 m`)Qfi@#Nh "0*ηzr Z{ !I w'x.Q9P ;RFA,,?@gR̜]n~588 !b9WYoQZЎz1dx8KVx6Rdt\@^([9LjN>-7x.Vq?DXin4UKrc3ϳiye9EwR?b$}Dpp-a4 +hd/fX" 8Nf;x=_˯[jxyFc;% Ow=abyiVi* 6V#vj'Md *0wHY/I\e앑IS%h\fF< 8ZnBluDwRg()/˪#GqjD!5)BdDQ""R[*Q*S!$ 9z׹fc |8H>cK 30,/>=0(t8R0`:%*rrQʌIT@Z82I4-( ٮn~8Zwc|Tm\_j4n5T^ Gܧx !#;aqyq6dX˞c{{rFـ e'O^}̤2>Dߋb}j+ !qiFZiHԈ>%7փIR<2NAJВ |{뎒͎ppiԡW #?}`@+AG ჱ4hc*6 mQt/rlW-ً*%y݇,ߴPx|8cTi5dC*>ڜґS,Ÿwߣb٭տ,_Xw8/?@j00=o(,))=a,Y C%)xYWVДT +,iN\-{ 'd8h%ǚq䆵1$0 V7j%16!=n3vt[Ę _[`S⎓yoJ1CyZeqgNV=r#q۲.ofe1$Yc4G_%Au݄n XsP-DNh$~"*LܲǐƜ?k${hˀЭ]??-e`a` *o\%!TGɜ/+mɜ_37{/Bp"HY,')2U{g,ɗ)YRY=%mо2Q_xfwS}I zQloXv OGl8w㲏[~xORiwJ&Sf^%}+Ħ+"IY6G"m8ֲ`.YeSn:Z/Hh;GT@a[mw-^ٳnܱ\@]tLJM* ow}w.OBgTb1gaćd>H k2~,g6.vv<ɞ+isgZǵ_Q*-=UYEI%3Leђ--S9˔,J@RnZ]5=-ϑV:<^D4i\&B @3(J'Sgy|o)wYbT<83}]_2& <;@1XLᗃt0ɖ$wDqƹΠa!]!^:Χoc q4\_c n$3BbhQi 3 %&DHf|~)znn)e&˛*ej#DOE:v3eU+ϝ*ɩepYh eP7e'z}߭R8Z5 (svF3AolDR]v=_Q-$BK8:'T.ò(c`"Dr6LQת6/2al_}*&͗r? 
ߟz!=:^Fer;Y6ƽ4|UePeVVڽߎ t{ܳ?>\f>Bۿx* G4by;P??9Dj3fҒFc]7KfK"t;*^ԃy1{_fj'FqɌ/iW,/vuT2(![̾D[B> a4b{EToyc2sqGXX4dlm֍`\8{^MP'>-K7:P&{фFw08)$ 9w>\G\#|:\jmI3jѼy8%$2\U4Vkޤ J+xYGyuHpRIVK!aϼ%$so85:urjAgKJu?gvS;f6bdS+uu-}:ȍ!xj?x0b2.ۅ=iHਠ%JtrM%v''F_`Ul}}Rj*HAڇa// ;+h=Iʃxй$˽a&]J;O,QGR}c%[>Xd3مn>|LNUF8Y>6f NOR2.uoEW- h'B-TGZ6lp `Ў%FY&?%l6κ^6)¾n ve3CoČƣo9I\OnHȒ\R҅{dB-mTۇRIo@c;UdGouZyt'u2ukj OAVor6{-,7fT VL~Nܒ/~ ?dXz: >߼Jˠ|+W-8i/f!ZisU  >AΎmb G5{ m:6OeV؎XxF\74u3@Pe}})iSA'p RCPh3N8. t-Q!fy9*dSI:Q.+v]I(hXIzJגݏ_Ɨ7ᝲ|f$di!fR0%ibd*C1=v ۶cNbvRB+&Z*˅3?Ӡ8|9s8P{%`RH&A5oe$[0B;7;ƭnUm+R8$˲L#gȳD$qGKhos".GK([|ƮwU4 фt^Vt]by;{`镘Nm{р^#דY DoL& Y‘)7.STAc1X<aYڲ9*qooܻcdp"_ZiV`lR0ա0cXM#t&ǢF7K10&Ga9[h kk*(?H\+ape9&ꅺֹYD!}\(gUN浣y{*_[Q)8Ҍg%CE 825*YjA N1"1J}F0m{\9GϏuRvЮzU.UƜuh\E]_X}UKc=$~r,Bk#/blS3GK๟<ϝ>%q"It )Dѥ/:9NbR9RT1%6A_IJ'),NI Ա;1G#Ø1B)1 28V0jrۨRYS/ʑzl r2awRK+$婠qs\t8Af%#ƳC8AOs>?҉N>OQeDu_Ȧu;Ԫ>|o)ʒOl\bz7fvYat8^@H"k5j)!(TPtd-֛.[P0@A|-(bqwgH'"}/!dUy6befq%q{Y9ˠЅ$z8^A_Ev8(F2*EgI2+pxLS*!ԪEяB {wRA=6b)gfPP@.14CҚOtQ3{NVc~|t_a8#!Nw 7u nI"i| `yV#*]+, ʾ/"l"snL|rhwJZƠ{1fekt٘Ф9DK0o)"V!X+~6B;t/H q4\ 8iF5C3,BfQ}ϒpq@OZe=\+iҥ5UOmvLiJ`9\Rd! D aݖkPDˢ/ghՓuዀ5H>gZIHgm$-Z;3=GYץPxsoTJ][$Dɫ-/Owiɬxó,dE"r1_tMGW4~l}Tq_#f[α dAF@R2%[ '+h4cWnUL FzBe*IXjLr-]N;1)H[)t֮{z"afj$̅čT#H+e")Ģ\A Z҃kޕ60A>,` _& 5%[ o,J"Ȧ$ <ꮮ㫯β y&qi|o˨kH=t靪 ^ 5p}c=+ÉG"J^G~.{|~3/mrk^3T:Fû8fe6@'&t>\3y+$5&BobS te9%H4ѷ->2)Q|Ҩ#߭#Mw~ŲcYDc)l+O)>@\˻f'(! bp"&&U3'y[dezL,  .$)e?_s!ʶj.morbd^;Lg(qGiv^&hDZ1fb v~Bo:˜^+X |`0w#Uob} @]w芥/npg Ǩ}=\I`ۀC)zljd$d!:Zz=d2*bVA+N<թ4 Wo0&]vu2*Wx>7ϙx%3@$ !]FQh%w]*cU! 
)W!?e^t0_2JaފL@<׾h"rJt6 {<r^cw.&cBt$}R1kxxɤ(f-]9_mnW{ZMXiAQrubk`e摵VK.2R VAc.giچ$% fx-b=F7 x< Rp6y3sF 4w$VD^O6Z~h'`Bn:cXk{?/J'%JK/!5*es4ʬx-Zш-{xصY"5PwPRpw!2*̅!z'[QQ`px|,Ԥ8m3YeNԩ5٨=zm񆨒s{~nf4#Q,bc"w=TFY^4I#;˘ S>~OEl5՘ƨMNFҾgz'kʦvYÈ6 V1_7;(@ړyfo_,ϺӰQ(maQ'aM*y)C /OZQf \OƱ vG 1B4Q(}dmMUg]3"\y#IZ2rtSۣ22*jI`[E!s:QMqMy趉]q- i6ݞ8Oǀk"b#i!}$ʌϳ5Z4u IQqKR^Ađ̢;DU6Xw5A!3?C)<+!pOBjҰ.Rf~H d0sy&APuv__FK>E-{U\*]J1ێ1ukĨ~  %9&W$ʀ !w1&(St\TW +EJ0}\4@:E_8J+ gl4v"E-ÊD;?![OkR(>|,4}Lp|ͦN )ʠ*echq Y܃}q7oxi B4+ۦt g5m24s>8a3p-G 2 I#'iW`_4iuN<թ6ߥBS7_iah}:(V7; r/ߍ֐+~\C4+2Rt*Bt.dQl Ւ ށ l-c!PX )W!cGX,zm4h|󗥌~·b o8=+Z!fȤK(I6oHB~qxwQkYZLzխ ۛuB[}+7_wVI&_kOWq!B4F~h=J?.hZPvBlce~e첊9zmIDb"5Lq7>/B\hmEMUW\"HOTQ}Lb{MWgo ['CyHjv\Dx|,X :QǏ%XS6A ̑ "/n.#H%֛j->>6p))!nG]<߲2AS ]/7F UW1{Vl$b3 WǬ&B3K907JgTmᵯ/=.vpW Y&=STUؐػ휷 9 ea˼nJ`lCCHlkvbϾKW=wFi. !rhnݲ*_AS6&̖ :`DQS;m?6#feh}Y Cr$7ٗD.%3b_^;׿Fmrf&l ̋݊f;|93rpʴYBۂv]tvOE1 X'<*X͇(Eh*` Ҏymk?ceU)#FhVS[?`u՘=& -zʞ2NJ [{Zk%T$60UIJZJg{nH*=“30\3Z )iKwcU2R~ 8L18\Mf}v@6 f?XQ唔C6Ob)dGNթ+}*֌ ˞!'5[U+&qFGpGDQ15$SL!B!]$9˸hR)C*Taf/eHtJ1jzQxEtTK\ʐK٥mKPEB:ُe9Lwkv<6Z jFQ "a3mT8'2N)qAP6(Lv"\$/bz\_<7qK !u-%D[F[0>a \e +X12uz^"LWxpwʰdNuBB2F8~Bm.gR._k{m)erjR09h͘ G;h6# S&- ky̩ &n[2̉]!vj+JqM@C8a')5NJnCh͆Ι`RhLaO`IB T&m cU/gC?]Cǂ3egwh #}|FP$[o-Zg\:`TPq* :UC}sܔ6CkL3LlQ`߾׎{l-D<7혠FtC}rTR0|o,a! 
Ur{yĨ 4{tRZhtDCcӼu`tE=Y$qoQ$CI+Z^Ư+dCw+z&SU,XD*B2uX@HIOȶ%ih~x@Ds?ML@$o;U`_@q,dqh‘w76 G _ UU!yͺp_q*s咴Nd,@"\zaZPR-QĶ.r gΈSl,b\3.m޸{JyT8 ⇧&'M`6k!<`$Ch`d\dpA9ǔ vC,TQ;Wg2h9mX5kI8bF#ER#WڣǾ'{][эѤ &@u΍`!`ڇ c' ł$$$yf7]?-dnb, Uޓ JH%ǫwQMt ?s4Mgr_-Ͽ^9׏Wi|Fh\pS`fdI{ܔoQH+EhqI@x!3A?Ua~7L͢d2 [|DMp6o L(Q 2enTpIM7Y\qun+l" _B5Bы]EzT"H 'ꮽ:bWk1g2m5IojU^Nzi|Iw+Ym_!a{Pr0w>툀n)@+ ]/Yy)2xkLNm|HlޗSޱ^'dLZuW)s ƬnQ㘍{8F$2t'q9 lvQ=n:rb K8H`ٓ >gHJ=χ+[jSm{0vTUb݋7쟖 d@cY L1 F~sSZ11fG kP(kT)z*c<ҝ9\X"먴)!1VʠKp}uJ Yͽ]MelmTpmNߢm.1)7 T Yғc{(s [ %{ggD堕 Q-NSߡ4lSQ\B:8vTL#ˀ!^h4viBĸ4c=DT+F)/ROw7q9C̽!WX+E2n'Hh a _2.*WIZ`$mE ).2gR5+ȜC x' ǩ[:[][ M2Ox6* ?S|`o&G^ 1>1'U_ufJȳtp!Cx[0G4RL{3Y[Un|bc֜&p.f'*E\Q-ۡ0Jfn?ƹ034e碃Jh%UHqQNƆ;Z`:kW+/!O_M j8s[`zp_,7G.͹xIuiADZS%( c]0( PZ :ؑoR[^ڃQ2̬RZqG5!O[9-5zPΐUX]Oq\(t/8ԟ{zXNB=ЯgU4 niɿ~|Z,:i9/nf Gv67vx9/"HBpF)fJ!Y YUHVeA U&J(ETC w/Ap+Q?Z.]h; !3l"d?= AΡ,*%Nvmi\Y'> JdOpɐ{3@Xg ;EWҗa6x{/ko~{8a@يsBOj%r)oCNϚ7[␫=G3Gq1NJkBK)HXUF8 Z|e8IsN`8$z$[J]z~~ZI\S,quYլY&14\=Ȕ9 vFwS_+p<.FDl2Ǎ- os"s9UJT~6JR8" f(r E&I\W-.>g|']ƨfAAvZO9.񬽣D62'xJ(AP` bzO!c+$5 $'I4B1|CƋPPe$9l@xV?_z Of7QS%J Hx%(2BZ&X * 5f%)OH*`"\{ h6y3Mr5CWDmG8gel]C|38tFc#K1,ŕc0q- ;B\l0Vabl PCs$SEݖBva6 MJ ϒĠY.eKj!A9(X, <,na;\*c/5AÏ+uB,~G*MhhD ޚS,^EA(`͌:Ձt HN1g ,Tdʙpp1S+ڙF§t]Zǿy#k| T()v*6oŔo tRWR4T!r:lrM&j(G^u #i?2L Hh>q.[t;ءspI C6"< ӱcm~q:Ni.CǕFy@"-x]L,F׆Q9.tcT.U1`tgFt~dU:i29_'.f~0˚nq49BzSPE_r_'u'gsE 0Yh#t7)n'uES~f,," 68!@WV(H-wsD$l2bOǀR%A䷊-Q1y'E}%e$ulu- #9b|'8q2*`!08bG167&4|?7(1ᰍJ c1q Q}a`Y.t9R/جy5/(SaѴLs5rcvf{A88Փ{b?8(rD<@d^FDkVkF<mi:jU=`!_"?ocgR^ze:{bWB ?Xc{!#52I0+MR>&ǒSa41Lܠ[sK6o.=R#"7Bc%p^D^mWEYy_w/=_A~Zeޤq}>Du\!+}w@\V4!!PlǷnXbEb ƤD$\}#Y.4ͻ}?n77p7X F0ۋ9%C2g#T(=[׼x9F f~kt(+)^U<$7~6Vl*9dcf &53v&FOyj#W1G!x*Ϝ[Թ8ri]/N\$.u<q(uL|'jfO&=oz1뵰ۯB$~uLfGE Lyo֋*w;p5yaT01Jx>`9SqbzSLW1F[ < (4[ 5>sF\"lM tX2Y6OsIwּ3r'_3ܧE7vX+-@i6K!iIv}/jD'_7ֱbRYZ@na};닥)8q>f$םӴKxH1l~'J&xf`=i,Vd"y&if[D*WҲFүۼaX韧+}%cДIZ cWؕO1MfJКCtI1ps ha9V%DEa1oeHءK[s4gF`E}06Ђ1 Y5:R]caWw)ߏ'n1@Ц5гk@0vρK?0qQ4yB{YݫĂ#-g|.*A h B/csȒ<6,R%j(&CcyP彊k3Fv7EHO uWJS1uSmN"LGqf? 
rIUUۗT_dzp;NeG{}8)n\HU{ o_߾|:9 ;(u1WOAXj p4$k/ e}zG<_rV.&o_Sy{*ş&o?~ or`z4a 9 E gڬ,K"{D ƹdn@p87pVF=U8mX85pؤݘ?L \8Э{yK} 7{S33oV76k(&KqpK5ȁ%b*Z`,oېRGkp H]ǫ5Zr`Bz-23_?Nәd ף?K\"/'[QF.0x(Y,jݮ{afD>!FELMchX+ 5c"ޱaV.kltQ?,`piZHi-.@vjdRy~rqj =;NM.r * ?Eɴ*a sJCa4+c[<^ud KR5N׳E?[N_އ]GkC.八tyb$T#AW@r`@-0VYYI2`pPRRŎS ܽ 4-@M8 T&Ꮟ.S4% ]bV$|yN8s=rB C!`Jox1Q Ҕ(ARVc7.ToX@}ckoы}1;NStnƌF3v1?ؤ߄VBgO;$ڟZ`|'b$3[Δla-NbeP6݀QԐ^ NqީVqJK4NNV=%X:VNVbGB) &1lp{p^i'*/ Gq[m3%2}Dh"wIPzqyƫ@1rE_qxR$ÉC';`C)׹WOs:". +ÙvL EX^@2֙1|jFy/ݜ pꂠQ7|eRǩ!q7/w'?o~9P 43Lo*BkTZy[˥"dUHDf7[=`D[`|H| L?+c;Q(| ;S#<3c!(o 5NJ⧃.*iJT}.Ut%*S(v@DLH?tke"!CM2 漱1-0Zk#vBYc9\cLk}^g>o vAA*7LG`ͳ?1TuȂv*d/AXO=?)m|?~qWOfafk=w|ITBZJ(xeQPi؇[D A$cDi,+8VJTαTLhka?{ױ̱͝bf 0 E6 zZz"ڒխO6Iv.V}$4続q_A=pܐTA~ r@Ax惲&o r#q,]ӳSHk?V+lͰEƆ7Aȉ=zϼ]ƈFěkiEoqn/vGf%^VnGZvRmu" ݛN{1FޮYoʿ-ͼeoO>t;ޥPX\ծBh˕[vq<\>6/3bvjyMm^G.o9l-尕.,G+J3+=)b2mӡ)W%h.fhtptIѩ'hNu!a+?l􇭐p~%tpP+Hr̀ (qXŶZ5DABߘx@Ǘ[E 95;Ȃ qdg(0ޓuONHvԓʹ>g[/Gmz=i'svN?GY)GW ly:PF1qQٵitZ۳% .Ηi;n>SW?ϝ_LG;TrN%A֭n kipec5a=t%+NT  hg3(K{̴ޤ7U8`5ɭNmş4l|wUՎ5FYتn" XBƊr9pp%W`Z{STk5SEZI~X2>Q~;h9&}Y9EO1S<6or鿗OOѠ2a|A|WSV%-Jd{zk`0~CY1ju7h{77)S'7H?'re;Лl+IS'7a>P z zh@w2J:s)d;H)V!52] Ъ):h߼r97cJ>E}Kzѵմ#ChȇbS~ni՟\zuTA~ .@6G, ,.2Fˣn)ہKE[.ĦZs-ѾSN-77=7Sbkڨ5gZհͳ:Y֡v[fD+Ʊ]rV{Psudټ&#;F9!;Vf#ͺȮ#<>w ;z-3ǢYѺcx[vڭ7W=uh[6poQ,COF!u,zhł!XW޸L6m~)7iiNN4}|>(|"g, K1 W14١0?%hG06)v 3Oa|W{oy߄NIrDg۱dt|lF bRk;Uk։ BV;TH *-uQ?>r.; aԫVRhcv.)s%tޚXR0U:`dܭd起!TPe] ZQՠ[`>;viǩkk;FovӽsNO>|,*XdHFgLճT7-m9HdyT%bԁstgGTU*>LRǺ^*0E ],M8~~z?\=noFs]W$u Q1^I2WcKjSv*aE Km&ۆҊo>u@4v ݈Q,$1I ʦdoDeFA#1H>$5^7Xco˟T\4~+#PWxgp5_]^Ppu^9.}=W/X;f:?pODq{:9=<@%03-/V?Ϭ}YqZۣݿBxV7x.L·([#% |oR[Ω-2 *3orauEz Ǖ$[ꡦ^F]NqX߹xN)0&FG+9Wea(J(=n)ti/~{JT%T8lP':yVMAbJ)".'> f,n)z|&/Z7[bwx'&svQ&ZL-_Ɲ}eeu2`9?>͊&r L)uj:2*5bw6c!yۈ験[OЛ,kOlwF>UN45Gia?|qvܼF Jػ-)8kt7ӨgJҳ,@P8,>( ގrAYHfZ*wbcpNqM%Z޹_ M>~.V>w @b/Fo>qBATxuWD Лo#\s&~f:r1P2e. 
+Aٔk!y\xa1 5*/0["6,jֵ jRȳU>Ԉ:krUD_ C JBvL$z{&'vOXhyz"2±(C7!CP3-ivfZsBom'qц|C`sy3-yGdHݫޥ3j|',\x=G#cno(Ѐtg&<"me83~$`ZR7'Z7v/8wxTGl`{}spm+bgc}~SXvQ} 1-"f\$k1Sb #^ԉ t5jho)-ɡT*(":BX'^L>q4ڟ߇q@ qr;qE7NQv(0S'Jq:bgnT8Fbr"ⅉ̽Q R`. %ԳGܪT帅" )L , kۅgoYi('{ZYru>eGJ[HzMV!*/i"5xdwYS/4gM`G'seE6 .6c|];:x@*z98tb|NQq ٦3S/4I;^d-_jGb~ c=f8bNe; SjN>)0<=;p /r׶ð,Q'C w͓Wm.?8=y:n ?{@GRBxёDA=ѹUe9H! fΫT\mL~s +7!Ggi:ǝSV0oox[{UG| 5@3q!Jۥv0SHhGRfdrAf\+o۾PcXkmU"J卯"BJs`R]q&S.Ɡ=:JK`rMlXƪij%!b /}{B?,)"ةr̋0>KWzn/GJ4YhC(,:Z$rQXzdA(EhaTY Ml.0Gu_6%(!mKHnZ; 8<{jt)y?m`hlݩhB-2 zd KKQ$pPzU9Mw"ʹu }1D٣]u"}`BR`eӏ\T`RáY:8e|h]d?[ĀBujD{95۽U$H}՞WJƊrg߇~`AH鍷|}vGڣrYSp' +d 4@ DOYRZU#HͥX!@9(wF`X@Xjī8- \߷/~ixXK3ꇳLwPɹΎ]2"‹-%Gy9}q̼̊m}9krΚww)'{5y#~awÃ߯9=13r-R\ ݇׍!RSZk ^Q8o[ ͋ Oof-zAm*˫Qz ֮kMD͢[Ű&-$ŧ1βv_TplF &0vIm@}&q-^(xtnhKVtu=a.+!-R.>߯unx 'rsӸy'p^;? \A ̞(sBoHQJ{!$Gfs*݃s򐿍pc=oKKXw)mdtaE .GkT)7SA'Ü $`߂01颵v1Ojs$[lcH~.;eDCĆ=&1:jx76W`*-4՛̈́N-kj1A^\HmwJ. MݪHs[bKI=Pϊrbқ`J #.1:C3SΓ2-gq{fo\ DC:]QX&U ;eERk`) SͪsYX aQtI'vkj^rb,ȊؒmbcmQЌvr0Ym|IZ4֔J\QMlH$Q`)ՠbe|l[wm'GnKaw]*-T1aqRg=N/+Eex{-GŽc7r]FUOvϏ@oDvdJI;)]KsǑ+=Q`=lak:ԓ2@ ~fx zfz@D5|ٺ"zWBkŕ)dS ؜ZTT  KSi̫gyw/)`I$s f^W>B:nq14f1#`e9fX8em-.ǻSqU40U)8FXd ʹJP~\;͑ejJ{&!P]},PSFzΡSX(U.@`A#8sNWS̘e,3fͩYp9Ss)t"NU:11TV7M*% Ro-Qh~@4,QQ٠-**٦MdkQDë5D#wt]qRh&fUm}YeIwP-`GŶ<%7Ir uB]r_6SÃ${bˢe.onbw`wG#CcѹՆZ-ޚn?yK)5OP\A;';B# VInߓ\,P,aMw;$jgzD䝌-ޓݯ"栨Yj9YY^2sqHu ?6:/OM:=p1FFW) 1g +G삦)6SS)o6GIZ,& TGSRd*eۺKF凭8exLuIԖ̓R*R5v*K$!kmN 5N̯_+Ϲ_]PI^T6;8uqZ۷\+bGx,EU ??ֱfkƀBMQMJu5:R2.BېlUXӓUBCBZ="A,o3a};QMjǶHLd/f9c }Qv4 Xr/ٜ2(FlFqK:/SKm:WھW׊UP8Δ1Do][i^yԍ't|Ԙuʀp?t6=ӭZTtSwNf[$QZs*S @\c1qHBb}K $~RKXZPc&ܪo?W<ѱ?E]/ ǣȋqEj9TC"rEBH:2 lY(LZ"-А^Y_Yڏ:j)(@$\T(%P%2㓚yFNJ۩ {o\0Ä>;B-0p|V8% 5hv4~O\e|Cн~!5MdGŷ2k#!%|&]Kr&!)h[pH겙_{ 8}~D(1_ޞy^T!i#ZCwS賒+]9/$u}[A2/-Ƅ)(fqHbﹹ?QSnDZ ]t 7|7C0Bc 1X[In8 .Ȁ@Aa=2|&G(#û!:RQJ ;#p}#I]h;zfk0 jUo%j>ʻVN"=G?:ڸ8_@`93g Z#2tz,l< 5}Or2gu9֤݉Eh>}e+eDv8e/ ځ>U1Ѫov/LGV? 
L65s$6sl:Wh|_ݳv7GhuwdQ88kev+o'xvQjF&fT*>9j未onv;[hBoڅ`'[PndtM9E7"˹/;T#U%/2.ݫt||C7Z;vڧ2K_VwI B{- {r$zk5k4jzN-W.=ʗ _ʗbQ |iK[`rRQKr4o篺# hK+]oTD݊ +cJet fW$rEv+eH"Z7Kv)e}Az@ί} ܵ[gG$A|xIK`ѿv$ Q胓5QhG>&'~.ɇ 4ңK3T*AE!ңo!"Ľ8Y\ԫqǐF,[Ok্cL68 |3ۘ=v]?[8)zIq7Qoos>RͥF.9wxL* o"e]$>SW06u5vuuч7oIiХ)gk$w@ٞ3_qd>z+;gP2Ghr't)K$Ũz )#Q9#1suėt?f9YӷUU~-w"rۄ\Oߘ .X`JN+YxDz'#`qKpM+p".2*ȧ:lLڊ l{:yR@: $G;* ^+ Ww* zPzeŤ'{EOk=]0)+)O>G,:@b\+uȥ*0"0Q1ǽ[Ţ8J_Dx*۱}:0,MxYxk1W"yP0t'u9*t uQ8]E.*B-RcEߥUI=Y"nch85dMX.7G~4*5@hv7d<(~ !Bx}>q:}jzћ7Ugv> I;iaL~wnn [/@"Pu/)!G7&Jmr\F" 7W / Gd#V(Enx+) ْؒ;SEJ|Յ,ڀ:fO/vB8Dj;FB(SߧA9&;3V{Ϙc u^/a#_>nD3ׄ Kmr%ez,Vݳ ] )Khx0X1 Ϗl+pJ%8av ~ͭ<;ڸٴ Ȕ Xy{Ju^%dW;%wC%E)Łn#xm @/nsLhW  _4ł(`,NuŷvR{P+kFEǍ#Da{v;;)˦DM4fw싺XLХBL(a@Y8PBɩb\p웪Ϩb''C$kAlF]jA~ dZ[^f9ׂ؊vʖ`-3*11CR+ qdI%!SJ(!q%g{RKu6H+Tt 욭n*huU%܄!l0w؈b? ONAz1grs> .k/l8Kxsy'yz``LjЧy*#}#mIVAwxnuv_v<U&NMR+r'ѯ+h1Y5YW z3*\حU IkuquMbiP1 ]^Ff9d-KAa9$Oo'GC,}^c/ר{ݜsY=WF]g\ZVjXERs88ܫ w=Q+t4# wU僡&Dj"i=9ߞһHEP~)]B wQlU^G٤([u6i6:~l7K0{j$^̲}$erЧJgzq7Bz_?b~oNc1"@[bEu]C RM B(hGsQ6J/-FMHɵ\ܻeI·GJ>.X*퓮rA6& VZ {UhҺI C$@BZPz5+) *t2'Qp!MXz$g/鯥w/G鼽/tD.6uFF>OP@`w‹Wv]]Xs)9;Z鋿;  KF-==;b~<:RK ktvWwh_Qkv`"D8"10/.yG^r—=* ,W*sW-5192U})+;i@b5{Z7=#YXc9TfRpe'|ETA#5AB%_ObauwK ;+@p#v.98%T<§g^T/Z{W THYR-) 07#t/Dg*ezB8f' ;2f/wbCS{#+ƚ]Ky 3[qߝMnPk)mm׶H5ޯ/1woN=cxgO#\zѾ41ZQz3fo/W_@ico97]2.z1g5u}.NVA5 (1uTtmVuE{]PW%[[M5ͣmHB.<2ש' >+;th1ߎCwN&(!T0ln)9LP4 vJ:٨+J :CMT(_]W\ \߇U[&%3c" `M ]p%҄2*j.PkzNv'Z0Xh@Aÿyt2EOn rln1ߌLEڷT [8YWCp [!Ժ<'Zavu_ |oggˀ9|L+Δ9xvzԗ>C( De~՟(3Qmrr^jO3^=~1ߌ  1}.5%'*f#E#}&AYɽC{z7|2U%HhDW,mn9Lι=CazmA֖(>X%#;SPsA7ժW>%g\Gszqqx\͘oG򨘏hQ:D j|D=2»J 'X0G8jgn2dnʐy.Us.z@0 vra)U+$ K9 ֑ςnJEvFv_9<d"OgLy,hu<#/09KAvs1btʖ\T2CM7!eoCۡハ{C_v;7u܅v< mU0v-8y,X$ 6l= D#|[0s wJ$<huBp2O$3v[>{*^v֎wrM冥1NN6 7]ŶvK\8{{(:@˹gOh:yXwϚrXx4%9>*_4V/owչZ)242w*(^"l̦̔| j"=o%0"DΈ~a)Գ2t23(}ޖܡٝ_˝D?_T}QZZێxVw(o&k*dSPee-` Nim"G q?b,ʊ/3TK\4` e{>*vB6<``_ dK5o>9ZTu@:6-;Q6z桃sUIm{)qA/4.$~jFnNK:U3FQBt0#U+L΋j.&HN4V܅[SjLTj}LYbu=}d)zhATϣl}/st%srw +NzvP5#egw׃/X]'ި;o~{nZ htA٤@֖fQ\Hv{VFKs(>\oNUSZ= }]*}HK%w)*vqtjm()Eo$drɎ 
b~#Y TfS-&95t=ڦAǰ+jAGR(wY`\R*U&˿ӽaAL'/oϿr|FKmNiW/_3 ,T0,ə¸.p^!5g:!̻ovúcK. b + &*s}k9A8Us!ZKS!bJZYN<{=v{pzߴra.5g^|2sU\M]ެ+$@6b %4J+AEz6eR%ԢJںMJrR Q-kVѯ6ͩS0\J#Yt?m7jKVZ+1cK0Hn_ǎ+6mɮ)EI}@y5'/Kɓ6:xtU⣵*n>PBsn2wOk#iU[yF06$94ǐ&ĝ?8k q~ fR[{!xguGt=u-y[L壏R2 򦽿t~AU-o󿟼8X.f #/ԽXq?~?}Ӵr䛓9kw֌#/If) LA^y>5&Y7[DzH‰Cg89|zRS!{Z}(Y93y7^~DUd;p?tµjuÚ꓊0d0VsFa*'p[D)S8]1Υ^ {JMb[ҽ>ѧ'[yvzP9b4ޜFZLݴ Jjɡz`ʕ2Bh昃 5+{ kQ|շޠR=RpҲC ȋ7G<]\R ɀ(kcץ.&d#0Tݖ\PBy"(LFŰсj0]'}=x?Lޞ-4R|fM, *\ ͨ(U <ܻ6 BK2 5龇:"C[{ T;֘1j^9]h9+wDsDtw*LfRT;6+\;o @%Z 6)nVjS zQM7,TgN"'OOy Q⟻3]t.X`0ZsĸbAr`3Rl=CP ޵q$Bis#KqX< 5-)|b&EjH3&ŎѰ_](fy].Xf0=Ne*RUDaqrz@a?=bgћT@/pw Z >ڳ\:IWM>yH:<9~_;eFui4|cwC-?=4d|?:;˶&2NelQER;,M2b`ΩeWe"}kۍwgG8G&fIb4s=l.PE 7ߐfHηifnoV̵X;Ě=ĚӰZZy(*O{VqfLmP}d.]b-u`+!;nHd= YD#AaEHh'3=#P̱zZ3BҫiN2O9e>7˵\!P̹o6 òQl lk00qlhkKm! *y3ߦaҚ׸}ھ ~# ޹{aXc]<p^yc|o=q%!+] L3@[Sya=^m2W7Cf^Otlx-(ҊuI@\_qSA"gG5DY;-w939&^ûnK"[>iO",y1IIdƳrÂ}.(Ns4MG) e~qR5dhU{慈0ʈw`r Ρ[=s Fw vh9kp{Lzm2Y|ۢMB6ц_R%ɖqXG2)sJ`9ٱҚ׎g̶`g%2 O^F? ٶ-ZYms@!^\[GS_ZhY"JLBs-뀡K +\ڮy؞:{k_HY{M*r̋Q#.&u戭%}d;S^ c$6k@7݇v`-3SP }&V3g߼)8Dyt[SVkfsk 4\p9/!׻ҍzھPֱ/lǣڄ.&$u6 ♗#tXMmhPUo{&;7DBlԞ_9ӰJmș.y!u7Y"V+vMoiZPv49m̞:nUMJp̋QMF€qݨG0@4d4yg]FzC@o1YM"6 5uo[P쭒1RN=Ij<#ٱ#0 c y 1Izv>h%F+절}PE{Ԝ{1moN޴iM4<GG.vTx [O\RDvsD2 UPSV,kh~l%z(eٱr^Qcb'FD7>AU+?x>ވ|HЛwwa,m}n1Pl F@̪( #țG"$|z+eU .-S>ʆ @euF&Lͭ3)woyz̲yU3Fێ$& 9KPBX/|uaD%CH1cm S Qڂ1T&獵X+bU8nh[4-$-nE;g|ﯴl_GoӛmAݎ#ab/̠¥0 yyx]詸rei6)8胧WM:gDH\lL.tm˝+ѩN]~C(:u'ԑ&pNN֮CcD.:uThZR)/҄_~o]ʏpWN?㿿_jyQE>+FgFUV:<jWO^|b c<e YEPE^febN.*R&qVԀO)OցeӶ Ee"˟ܿY!֦myBt˚ f9-L"h9%)(Ue4.MɦXHX)TH 9vA.trjۓ73 }棻!Aᨒ8ƗKPq~ nwjV4M"!}b NDLJK{!_;$gy^mPEFy(sVd4蕳8Y;ˢ !2JY5v}$^98zuPfNvt[`Qw7ʞ FiΓC+PN9ٶQأ0jhNBr YF!#>do*Giы$sNۨ<>A O<06va ;ł5GBQg(ѴkoW4Nܴ ZAs2HGNkyht4iA1hX#2h{uhjvPg@Um4&8s\^D0#DۿIrCma㳲YvL 6̺γbP3[-ˆxkw$4cˢ|!#cVܥ\4L EZ?k}O46?}ֱ vaRqNFh9BU쾬><荘Xȸv'*U|K,MXrRdx0X~zWva\?IbL Pa'ip)fݧ>57F4' tgQyoHG)OT9zy$5'` P$kIJϲ>оK pWuu8,Bf[8,s*4L~T;1م[TEYN2KX%sjUYJMݟsj*b;?#`NWv-TQκ:478Vv6vjZKrr+Qmlu.΅4s:F2N3AFsJÅ𺵃J#1Ѯ[-B S'차SPZhRshNݹ9ur|&iv>429vVv=msN֢iMVǡDHL;acXQs T1ʏL([ּƗQz|/\x F7}ӝ9%y_0 
d$jd1e2é$ko_n7|EYG}r:18[>IVnt85%&i>?B.dYtcTwv"]Z$h=žвc"Mv݋EE.ba&Ǡ|Fmx` %AD0-hGXGc xj pT0BQ@ z!>񀷰}1F;ўxC6:ƭ;c;~י!8obԗ8)YCe|J6c973S7X"FhZ>nAHY;Ak>D+Aec^b=K/HNS%{MI5NI78&T{q%l6!@͋g:(u}l (YS0nݯMmCUcq1΋䆼aJ0av( 8 %`ieIq}8)~M0nV } ԉw]cljcP5Zқ(wy:geIw;~QځcZf VL>@h{-#^"@-EbA6]gJ엗ѴZ p@IqR6?F-d(Ѽa?Dx]ؤ~$iҖƙ"\ΞmjHјDq 8(mZ,78g_UwfynȄ2 R $8 Є +gopL;#\SHbr}](gŲPeeXҀ"X @v-:0qCk#?NxʬnA07Rlͤ]I2E@ݧ4I+6V`ARAǦ\K@X]uBQp#* t'9i$g *ܫTEF:'PQVX/iAu@us\-Ӡ>wו^7cݗH~$GQo;,r,]2+W=WB3+M|0˼zsjMcy4]a*T"xY4Z{ vhS6v[9`dz F Z1_Y 4b6nѮ2h N’ɂch AXISDd--8넚L%iZpiZo$))Vx=*;D<:8RG[rF鮅eV+1}Pe_xKT)QVD6ov:^lt\p+EYrQdFW`"S0R@e{A}J4lfHJ4{ [KR"^5?  0\lgR1o_H~Ƭqx*ÄAKV^:q(Gi:NiIRu2h  W։>e,Nw<}$cgw%!7LB`$$@4}bW``x>oв,eAK/$?*FqW;C3*Pic|ũgYONъ&}vD֠x qSnX*e49ckjAz/2D-@ rB@W{zl#,d7՞mI*׉cGcU&`I.U-jʹ:FRߞ}Q<{ߏp<}tB3H` SGEBBqL &ba1[)[8ga܍7r4ɗ >̈WK[n@LѲe0FY<|ޝkIӝ*i3*Rt! lL",M>[BJXMbZx90R湌Hb.\H}5cRҚ#/52m;ݥ\жC<0'MJ!? &'xbYѶ. Rpqdii>(kjY+$X9j %>y?ݥ|+|j&?1t:1,ϿbK'xB(F |YHtvѼ6F,1Lu5(i+Y!"t6fC"(Iȧ\GRr9"K`QlA?{בő* H2 da30jw,]ǃjI֑)]ER_9E>~dU)EJ2zU # (=}4=C:_*ei "'|}{߈|O6b>fV,Yi܄y76w| Y@_#2`-:|Єjk?SףNΛ}*ӺAD:oyyv7?q,j}J7ֆ'/: dl͝7/̙`Cn9#tjdZ_MEIl?Z:yNώD2WT1O r|"נ{*: F嵝_ N@6[JE]|j|bPFC6tqbVm4f׋pP'3X=FJqgM 1yUo{4}}YI};FGnz.h\y=U^o';X- ;,t되k \Dƽ& __ #}AҸdкeyz2Y$olЕ@ޒDhK>{Q؛1&G)$JuS=JYv R1NfǺ$ok[EzGQ oo|?ߧ|Dzz̀y푯F"A&`՛72E>0g`?Й&|3b:{jn]7LݟO̟5~>\ɤ5/ϯ~ȱ!oƴ%}"(#rd֡[F$ zВ_2WaM)h?mbph/5vȮApN!.ٝho7:+Sj3qv09ik dZG*2Zr?6z"DG*S [^v]ߖň % 2qr-Xcg6btFJ0{h{sX-%D-msѓR2#xce1悦誇!! 3I{g8ϊ(f(9ej` RS0hEҘYb(1Lߦ!sVx! 
sdS1`6B2@).˅1[dfT**AQ`0F.K褃c=rZL1@d㐼bաZb2%ưi׊bKbz:/nXEUWNrE7JhŤxYnY U2ФAf6Rq8U\P dľeT,N*N&NlQB< o '|5g=dfeC[mTEB|@kˬ!V.G[W;G$'N";QQ^nB`' ʤFeXgzvLArzOq 9+侟|, 3}`:T"H~Z!>i"@_؈oWN}Mey0r֚^bS۠ГITHq C}磕,HDQD왌m>C*P V?B*J YU5pVNX| #=ruj QfDw:S5f 5VSW dN(c.8 b P5Dﭏ2 YL;vhj4Mst'ٵ }cE 2X-RW錌#@7vңZo;UFyE皇Ѧ ?ފ~ ـ/48q|kr,jgZ'k>3 mRO=d,qe*D`КƲ'n>ߊqDT2YA ԣT=o[ū_Ђet􇿾:x糷]WF,j~&A;:BxN ï"V-qf@m, 1OM*'> 2bבUm_?>*AVזprM}y+0D*<J+#[Q'CHw7m2УigF :^pY.<~;OXM/ex RntrOR/ZJ4niѬg5nu=䠵L2y :'*>W}Upw/Vv#Mvp욕ZýKϴ?s=6g3C%61lMfRARszOF^& u7]{NHDXFl1nNԮSypwjHN{ݩ"C%Rs{cN67Z/&|]zE쑫8rMȯ>1|ÞBl:u;GӝRS;'N=|k|<9kF1LO'hQd9ohnry{bM.>ȥz4y||&޴1iY7CĸhQʜ\MjlfG +7h맧3]QNttѦHչzUXsCW3{xU9˼m[řǩ/ t4y.PxFO_.aY)IvZ6aFh43ڪ3ԲE'it`leYW%T$>N6%56)1#LIvZ6cF5Yq(ힲjIaa8P= ͨh@i"!&?=&Bc:R蹣qB O>+ҮvdvX!_=*~>ͻ\~8x~v&@Is+8{g7ͯ{V^UOpS.D삥7 m@|q$/^6Y`T:̭G֎䭌cglvX0=˺N@kz -^= ԧ5RlrߧyH߇/ܵO6*?n OږGlnE\&4Qibgz M}U{.|69C@_*vCon}WVϬnjx;?|wSErs\šIC+cTa4$n0G/,:#Bx-SB<M9acl>ZM0?1}V0vv࠘P;zd];RԥAMPi-L\,Põ|F.J{ì¼u=DxI=_Z)j(JVXuoXIgO}u]+dY>xuz/GWO7?~K:+wWajcdpxJXX4rI{ݣZ.:ig1lQ4<2[Үߘ͕W֔d( -R ̼LL YJ]ܔr)kU ٯcŞVĖQljBH%fEN@m£^Y#yȠw;բ{V>5 j=wZ2 *VxZwZ2X5 ,jҜ`ɠi\Nɝފb*mx <)eǠ,BUII.JO:a!v_3D 4' "(NMځ ;ri )I;=ࠁ!YvQ=L}EL{v(ҎOz/%V 4r]v.@*{JҎEi䭆2+Lhtd\|SUB;̲,9B.\iOcKq qӞc1=\_LoP֒Ylw~un ,O/nlּPG?yv_GDӏ#6L#v4b}OeRQ:mx5\x?f{Cz dI~Βp{gLw՟~_wқzKk?')9I4@dnw9x~=}7qſz1{>O{ci@;-€b2 `><}ʳ٧l4樝͞7dž9 4C-Q6W֡/V)Eo=wl({a!\NC'~&=fz!>l z?jpl_m\UՊ .6aK -b+;B LLKα2 ԥ\kUe֓j9Ua!Uk*iAkzo_yNXq倾>i-m0d ?ū f+-QJh H=dHfh]#8`2DeIR]Ζ6ÙNqN!TkiP%;$mA:)v6֮6CN/^e/ùu /3Wj|ڽ2OG޵6r\B%&M"`dX'ر6jH<{,RQВؗܫNns!aMEʹF?pIl2mܚlsMN 7|88IW>U\k>w I"HN2D%ž/DJm:njLi vˬV[+3O?ϫ%WVCEN*˥Q #jǪ{*(rf\W.,}PP9,e,N|@(5&*;5D ٗ,Re2d(NzZc3) > =12dHP'>2t vZSND:vQDym3(o& \r١<]Si9ۅƓWIE9w2.&71Ge\R* HEC/fȹSǏG,S Y? 
~S>݁fMiVOҭ}݋Ю=iG>Sd2&'pCt˅\]T5o/ WϧM-א&;,@f//1,0zB?_\ Ȍ w7^|?1?KL-!z 7 +xbky+E5vJ;Ķt%; Ɯ`ba:,M9rq`<fjo(BL* r朋+ 6$e2CVK-22 ̥ s\q 1^0TH)A\gcUQSn|w[5ų+n|M߻͙Ad猇Og)5o1S|c8IOh^Y|~_X8| L'|ۼm~5/$4Ap~c׉_'ns[2C~Rlȃ($f g(h_'}Jb!tvG,=gO~x0/Ù >!0cd.N|iŽdbG8/~;K;aIj}}R%WF+}>3#6fv5s B0`S ilCu<0c?7US2eb¤VYpK5,0 2xƆz'Q^F~roZܶ6Y8FI[5Edx/YZǝ5>Sg@=fܛh\LfI!%+J"!W 0 awLZaNT1qB iJWR`!~y?nA4m݇;}e-r:ӽ.seC!n?{v4@Ns&P&84G"GQTH뉇P11x/JOmaY)*`Pq<쎑9bN7>dZeVR Rܖ12TV1U~Em<.PẗL쎐!Vvlv^2K1BlUQ0E"Jw+(U#b2) R#Me6E({(&A=חCtxPRhBNדz5oU^½S)iv8TSfLަ#MǦ[?AnT)||J`1L`n*?uś6Աy; (#fTi6dn(nMtf27/Wc &,?SQBj D8۳0iMp6MES),Ml X1E#2U>P&u;:w9F?*&ͷ.kYX$$ƸYnn'OI<v=3 tޣg潣xJ 9p DE+J={#Õv˫q]('=Mk1D7b?8j[m?gJ2zU0! 8"! H(XLkd DbB艴$hR7eKQ4Y]oי= p~s&7}w8+ cH~yY!n¼YH 3=p7>?'UpP/t;?V#{|ulGO^)O6ǰ˒?WT|$QҖ \؁9X` 0I )̽'hV!2W:Ha<0΂`6TO"t0[UJQ[9#SsP y$2cz#6Y1y ,M`P$\rEǚ sƨRaDg8)DG@"r Q{j@(^ǻuld;d,"Ayeqϼ7r @qZ `+ "0FCs$Q(b1bD)QF7[)`XT7Lids ߷``P`ULWYR!r 0F`8%:kdFYcN5àd*i^oQ BB{x%5`KC%d,'`v-`y,J~ցLSP<& I(ZX|^H+x~}w0( Bb.`nE|'FSk` & S,TDY1 ib=PgYHC+Qh7J! J_T64~͛LYw\.xmnU;@F~ueguJ~u|u% S.i`VVb@i^j :[!I(k7k7NFL EʝM6:6` zoN2TZܜfj8S}Wl(xw`^ k#^P+G֥!x 92FE1(gjysPƂ>Պ))g_+*|xUW;+@x%c>!!#L[ > ]KL% h2q{Ҥ_cH#I+X S4, )1#"&u&:G!g,AQ9x%y$hhT & MiLpbhtLF"+YVJdp  gD^&xS1kIOKU H+]9O# 0rÌB|쮳w$漼ǾzDI@q)`VQe B_)v5 R7k(u(+k:1xgBAð IOdW:#Gݷ|G Qfsj' WS+N8tO;ylEAv&/L40B@HK ޗ4͆T&$W֕RxK';V J*#Qk03, ˢtnE)40],7LDrWp:TBN@Ͻ00~' K}c8hЗ*\JG݁oRZu^ Ќϵj ISڄ8$! 
mK`:WcLR*xDBcB ;R+/v*aF3c!㍛b(J)υ'kXzμb΂/)|²@[{ADŒem΃: 5HڥQpN6Lj!;c45djy`p<3Z,V!|$;C!dal0)u>y W?{$_iWk#5l/x!"Ês!%JM=жJˍ-"\5{hHFCrg ɉE{]1dc y$(F##zu>D wd鳷/YViF4%J0(< "p/d2?\t|cF#K7H(QQwxSC(0~~Ex3: x<Λ܊zx2}K;u$qjŦh@4GOO`Q6i.Dl<}nr<`k4nJ37?)2{$7ZihhشwOcG5&g[Q *~3*@'}o^F%mȤw$G /8EWPuŌ)++Wٲ_D }i,,7߼&~Tm?j-=pɵ/9@pDioC~+E)oQ^ RA-%S 릿|]SMFdWϽ9꾜wzMqE0=%AV>Twx2?YakAlل?*C\KnS]v.ǟ"K^lw9堀r<Zz.?ɢ% ,& "VɢPͭ3h2<0AX_48d(4( b9ClafnP#fnPE4sР(CEi䫲5a֬X n~eFEawtTv7C |zPqpN3B,,ҧraEa: 0;^"fGOB>=2f 7HO-26WcH|.pާW'3C*& )2}^/ؾZ|Z>qw<ϏK EQh)um<ӗu `E('lAG( M-TSלȕ%sxpW9V'Sy^0Wwuu{vP{ 2 %e!tڔ¡”ʔ)}EƓVK~P+ܭd;ޕh]^FO~p$%jjͼU@Qci*t3(Jy(Ozм67WWS|a] DU)*4T05J@I(wbPV4?^:+[>mp)f-ȃSQĪNtkWCV;ePJ}5m=QUF‚pYH*eCageq/_+qe]n?knԄ<Јei̗3mPlg[^Y\)zXR a&z>bP1h +-5L2δnkE̼R,.L _w ] 4*}j1F(*; %x?mYjb{Q󬸮BqHGxu}{{mu91tsUMIRΩZƘs{nCGDYI4Sl]\$ ű]Ow(%k:sšȘ>}XAuǷQ_}|09nw0tMUIzo7X-|mK.*ruÌ טp7vh89En; |U`I8J*i_?g΄r!qjvtvOh۬Da<(*CsVW jq|d.,x.֎ّ3p>. N`\+Svn# Ut~a3o=<&Ijlk.ˬ4y]p7N6>!}` q_ۏl.:2Ѝ! a?6;5C؀YMAÎC!䑵qN`CY; _8 wc|a|#j+0CcF xKءZ+:0Yc'e݀3$QZ%~FŨP>aV |_ m0䏓e|>N>L2-Uz+M{0N c*`n?!tl }͠5U~FW.$}`Zd4tvTF̧鉣w ?}ޏp+Ec:dҖBD-VIop~E߇>#WC.Ά%n}07?=5~oUnd >sw}7_؛.* |LK烁o~tөwɢG鷹m >+ \| BFt'3M 'E9DqdeD#ʣ$N,ʗpc lqK(nAZC`d͛^*Yϟ_ǠY/rߠ=i2G^XV4hy3$ł+*o~0x ?1?hh7)%ѽg捛ݴ|TzՁ/Jƣ^ a 62Eq|YŞi?~816j @ݓٚZ[;M0lU:TuwܦN oL80 h!HiA 6HBJ,5H=ҪAj Hi4I"՚O}-M/ݴ3Dƾ7ëUX'|T3݀? tU=i}ϝ{weBЂkfwmp4%גMV5ɶoz?|7< C#y`e7վi{G\ o5| |C<6"Hb$LKm#Qx"ŒZ'*I#H$/!pR`p(('vq <2HQR H`)4*( 8I1?A͗y"%8I/r->4-5#4طYKF>e?O\#"!f;aNȎ& DD:PۄCL̹{P(p$YM\g.:q8'Jhlm?-^CA(R}째&[j_`CiAD25ΥŜpݘ 7w0lgLnK[c3ɢ}+?Mڷ.frɻ}notKU@K%z,:ʄZKWj<5:d@A~*G"s$Y?ZjA> !_:M-ج,%^,&PaXLX!5,0h7"'09t9R 6YXdNu&'dԶ +)|I|^ywm}⃻bCzxI]D(!y2Qk٩6Рo""*L`&"GdR*Ha{K*r0la9&wrٹ`+n`g0V[r`5>u [C"뾶A!VӴэ>M&%60+r6('d. 
T-k{ҢD.Y4 7q~=޸+^_M.=E밓E+7M+o= Z|Zq2&-_yiVΡnrkւ#s #$s5{e$={EOZa4ݺ Vo4WOP=&J0BLԩ6 )A@m+ ߕoBMiW])X#Hx""<&YiKnj\ VLcQY =X[+=V"h @?_tY0]w.8I klQ X󳔤!8Hf8QסIbNRC%FKPn\c6껫# 5)IW(FG[kֳff=׳%Hf=يJYlYSe?RKo%jYe㎜ H} rpR6?~ਃG C죠#k"K-Atb*IMs)I`$GHJ oG-ȆPRL-ḋؚjHBrM0x]4#h>DQ.S o1lC,B;)x4ڴnŪ3ZP ʈN;X󅻚KDNg݂ -kݎ!!_#lX7IA }Gv8x dnnǐ\DdJ][7B`B1(#:bΗ<݂ -kݎ!!_ܴnKnwTn[{vm݂ -kݎ!!_TꅮlZ7.w Š脾u;TκZֺCBr])QN.%" K<\Eex]V 6x?ǭd77Z )6mkoI 8hlJJ "B7Ǹc#o&!m6Kի̢'#l6.L%&&i&i gxIG{GѭOe=6~śsjp*15!M]L6r&0 FS.RS m(%>_s+bL3mOtQbԶ0)N#0p#A$A?[p:٧ky*qaD}NXk:AxTbE$&"/omV~^8p+>6QD:$26aanI 10fDf'ܴHG\s MpY{p֯^_ 5DpK~V&ZܯGc0[^QLF=MyP (*XԐ ekjA \pqB8rDĀ#b̹6U9̱T 9JL m2SVh3|jrH2MBĀ7 'B>0J0ՄiNPIsY8:<YSa ,Ks JIb,qqcp9I d@vI(ŨpU&#I#X"GZD;F JF0`8v( >U×DJ+˘ tȴX?Ka S,6%]͐&b4X"Ʉ !2H0" Tb@xe SSX@PEE%2/ ER":҆G:rX@/%Knb>8I e a#*}?B\d*QHB[{GBUr((y$O"M.E|<Ev+FMɿkrtt`##%) ] ѣ:iRTk)D͇ fz^h7zKǜJ[q,WvZDrt7G- :ߖ|IbMX@XGDDw\fPmqO,ebgtQ.sY|6&RN}c~g':F,T>3߶t|?|0zgڧ[hI.|6JpyTʎ2io^jN;*l3m{L mmNKR: wd. G Qf#Y#YTYrFc J0qJ` tsC#fc6έ6{z}맟G>8 ~SNQ(ӤB;L DlWPk8n'n'xOq]jcW>.hbڕ6TzsGҿ uaUzVEXJY&x"{Jj!_d9,_1N"JBHvs Q`i7`ˋYƋ^](T&K'@i׀vkr>?C˯W6} Sg6mQ{eżIE"^ьѱEJGd4u,'#3=,IiX֙N<&~~ ;!qTH:wߟ&RV/J"@M[c]lI_U46߉]b Z)pޡݐ @O5Mb0}+52v =M20ɢ_ɭA%V]DsuOn6c1q-X6-e}C=B+`/z?*(̵vs3H!aMk'fk&^b"9AS0G{[XZg`JzuY "(ǒ/ X,c!6 % shGgB0.0b?}.򔧄3P6z9qZy RTVƟ*m&n2MS -ib)|꣯Yc>3+P)5a O|ZÃ6WJT}.Ya2ٻƍ,W?M0MFh IN^6WYD)dw'>U.DIEYN6IU:ugYAXbBo$SYj̡x\⟖^Z`9-X&040k̲]8Mo5D"EIEmewK>%UBv (L" yߡZfR5C!CoHl]wmpd6RۥGE['PCTqx?'k:m-7E k}R0t9nED7Q ^eǣ*$ q`ߨuT|#sD b1 &iۓy^Tnw[g^b2`}c7]i4Z= z0ϬpB)q`r7+{ѹ\J^KEp!r (|p<IO$lX[0 DT";  gpڧM T0"Dν1_'-A2@A̢ŷy`H&Yc|XۇP)d"".ʕD-a^]JE36Gjy8q A% <+#juh6½+r}RM _1YzmsK!iE "Ġ{D:}]{sXZn~9{4gLT[+$2<%{d$@gsmB]f lY. lR/:|aj^@9DT!fZa!+>կW-O4#}}P3(/(&` k [sI!9;<#7GLVVߌ;DJ D %WYc#x('a؟O݊6*({!X:_\ LP4s<_$@!dy 0ja&Rk^`8r^7rnDpXcmg(YuzISt%Õ:$UD@imy0.P]lͶıJ~ߍ3:l0OmmxouɎh9Qd:K+$Ruڢ;2б+3a7tIԅ;SI@0n*6,*@SR힏Y4}ڀ$ B:Bـ EX8 8'ͪr,%sWqBEH8B쟧>! ! 
$ȥ*!]}L5F_Sd YmW", 痦 giɄIXMG@BY֞٣U ꑅ6dxL:J݃J$["8At΁uǝ-0@%wMRQ7K,{/{x/1VՃK ~ 3db/b~U8*HϣMM)DlSFlFa%&v>Jr)BQc),맷Mբ!h^ t %a]=`6#+ЇW'%IqxkLwW 瀪 /__yx/?eY>r)wW.=nol?fŸWfzcc]3Sڛay AROF\y9wRݷPjHjq?_?\ylXU˖1hzXaMmm6t,.8@<]ySsJ_r6Tj3z ,ݡ$2E]N] IybSez5\ *׊2 (Yf^p H6\pƅJQ]1N9{N2!Da iK#\ $,0 ѮQ[9jw㾵/ڪE B%9r27 h32G+p-׎]<eA Ar+1?V$8DK;„( CLi r剬VR@`M%@ \(b7"~ ;B9Cb SNSD8 j8vT:arV27? @ܽշ*A˯vPt4}94}3! ": Z -Up X:uh]y2+0X#\8$jHbH\vu2*f@\16XP(Z4Gtu(.iI*= 7/=[ТEeypi@١I~ . };(͡! V>`(l^Zt"r |9\\tp ~ͺrjut2$[4Ϳ)8[[ CTmEfhiյ8 AkoR@Ph1u8kMk3|Y@g[` d(Fa׻. Tipx$@eR];h<irmƯyn+:1!x O=v8no}7Og6| TB6LcUذ%2F ]٢W%!""! v;-V`bJNH&r:6x8"4Bqz_~=Y?ωl᥋ԃRjY5BJnlūdTxK0蚽^~eԆ=>1&PŋjYecff9+˒Wo%.GEIc"H!M-ƒ?gهIK6U&iRR4Zxkbe_~ƿ~k7 sb;+<ɭKcN:Q/g78^9Q2L.dR/If^"u6'Mf'm+yD2Xz6 ʹ% B0Q !rU3Scgy_z0vk)!aCeqq0k)';Q t.cOq+oN-NKh+䉯Ha%I ,A.UIGr1MB%8R&HD8 &#IC42,,MԿY%iH̦ӕھ*S<2{!9C29aK&<1 L"A&>HBd1Ro#p X")1!1PҔ"IbEC@ U P4v(U'dƔs,jX@w$cgK>D)q ?c, KHX4T|q8$0N~'<($suGX 〇D.莆Vn딆I.4 @!CNs–R@`FDcB_A/~@"Sr ^h( 1"QԇQC*)i* )@UC 0 0HvyDRb!$ Id@X H0 }P>!HF@FBQ F,)QĐBLP2Gi n0)(DuF p\Ll-;{m8&0R&k}֓$rAT,.^rjh2w6ǭ}^Y]l=x "!}^ˑ(bu(\*#HQaܯQqr TmH4CrJt~. !Nj/޸4E%VxɖI޵q#ۿ"̧A..nqlIc bK3jfbLV7OUXEVR3Xd=x,MY]Eq'<lum*'rvs g1dGL+͕y8;X{A**dH4F>`gͿK vvI{CFvgqCN9w<ޝDYg q5S!,Z~ x=ՁQ{ <=@-`QwNzvC[)՝ڊ;&j+aj^ Y͝s-S}JuoT?|{r4Ox#,M˷εzz}LY%N-<9=4m6+{+xJiT#"ۃ6<>:@!=<2QݹOj.UwGIֳ"~7L3;m{T.O/})y<늘.Od:#AvqPr'D;D砰sY0H1<*[).{Q34BYv06ՔOsUtiU2:lUJ! A Z6o\8V=*[M.OlU 7PIتx(jtYעp0cnVj0~npF]<2 $kN^Ej+ajV5~^{y}^s-FB4zaA|Tyu^0zШX}i>)"C[ם_ ᴿd צ}[BhtVB2$U0a- GcUi0k{Foc4]U g2f+sƃ;HN_5V@OR%-ٻf*ըϖUK_~__ ט1Z]z3]q71.FWw/:ݒz&,䅛hMUGލ %N[ gԊ(}cTѱ[M4ɦ.n fp^[ ,V*Bnqg;WJ_ZM(bDiB2)\nA d B 4)60\&)҂xN+C->G!1M"fsu. 
[binary contents of var/home/core/zuul-output/logs/kubelet.log.gz: not recoverable]
var/home/core/zuul-output/logs/kubelet.log:
Jan 22 09:52:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 09:52:08 crc kubenswrapper[5101]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.124963 5101 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132346 5101 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132396 5101 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132404 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132440 5101 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132450 5101 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132456 5101 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132465 5101 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132472 5101 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132478 5101 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132484 5101 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132490 5101 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132497 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132503 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132509 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132515 5101 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132521 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132531 5101 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132543 5101 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132549 5101 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132556 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132563 5101 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132569 5101 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132575 5101 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132582 5101 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132587 5101 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132594 5101 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132600 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132606 5101 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132612 5101 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132618 5101 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132624 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132631 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132649 5101 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132657 5101 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132664 5101 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132670 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132676 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132682 5101 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132689 5101 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132695 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132704 5101 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132711 5101 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132718 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132725 5101 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132731 5101 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132738 5101 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132744 5101 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132751 5101 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132759 5101 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132766 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132772 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132778 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132784 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132790 5101 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132797 5101 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132803 5101 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132809 5101 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132816 5101 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132823 5101 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132830 5101 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132836 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132842 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132853 5101 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132861 5101 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132869 5101 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132876 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132882 5101 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132888 5101 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132894 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132901 5101 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132907 5101 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132913 5101 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132920 5101 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132926 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132932 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132939 5101 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132945 5101 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132951 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132960 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132966 5101 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132973 5101 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132979 5101 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132985 5101 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132991 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.132998 5101 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133004 5101 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133914 5101 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133928 5101 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133935 5101 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133941 5101 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133949 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133955 5101 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133962 5101 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133968 5101 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133974 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133980 5101 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133988 5101 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.133994 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134001 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134007 5101 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134013 5101 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134019 5101 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134026 5101 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134032 5101 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134040 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134046 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134052 5101 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134058 5101 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134065 5101 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134071 5101 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134085 5101 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134094 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134101 5101 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134108 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134115 5101 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134122 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134129 5101 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134136 5101 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134142 5101 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134148 5101 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134154 5101 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134160 5101 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134168 5101 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134173 5101 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134180 5101 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134186 5101 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134192 5101 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134199 5101 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134208 5101 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134214 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134219 5101 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134226 5101 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134232 5101 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134239 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134245 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134251 5101 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134258 5101 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134264 5101 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134271 5101 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134277 5101 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134283 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134290 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134296 5101 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134304 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134311 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134318 5101 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134324 5101 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134330 5101 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134337 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134344 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134351 5101 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134357 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134363 5101 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134370 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134376 5101 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134383 5101 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134389 5101 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134395 5101 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134403 5101 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134410 5101 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134450 5101 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134457 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134463 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134469 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134475 5101 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134482 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134488 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134497 5101 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134504 5101 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134511 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134517 5101 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.134524 5101 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134743 5101 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134765 5101 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134779 5101 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134790 5101 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134802 5101 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134810 5101 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134819 5101 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134830 5101 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134838 5101 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134846 5101 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134854 5101 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134862 5101 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134869 5101 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134877 5101 flags.go:64] FLAG: --cgroup-root=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134884 5101 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134891 5101 flags.go:64] FLAG: --client-ca-file=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134898 5101 flags.go:64] FLAG: --cloud-config=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134905 5101 flags.go:64] FLAG: --cloud-provider=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134913 5101 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134925 5101 flags.go:64] FLAG: --cluster-domain=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134932 5101 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134939 5101 flags.go:64] FLAG: --config-dir=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134946 5101 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134954 5101 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134963 5101 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134970 5101 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134977 5101 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134985 5101 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.134994 5101 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135001 5101 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135008 5101 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135016 5101 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135023 5101 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135034 5101 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135041 5101 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135048 5101 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135057 5101 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135066 5101 flags.go:64] FLAG: --enable-server="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135073 5101 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135083 5101 flags.go:64] FLAG: --event-burst="100"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135091 5101 flags.go:64] FLAG: --event-qps="50"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135098 5101 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135106 5101 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135113 5101 flags.go:64] FLAG: --eviction-hard=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135124 5101 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135131 5101 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135138 5101 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135146 5101 flags.go:64] FLAG: --eviction-soft=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135154 5101 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135161 5101 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135170 5101 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135177 5101 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135184 5101 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135191 5101 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135199 5101 flags.go:64] FLAG: --feature-gates=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135209 5101 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135217 5101 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135224 5101 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135232 5101 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135240 5101 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135279 5101 flags.go:64] FLAG: --help="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135287 5101 flags.go:64] FLAG: --hostname-override=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135295 5101 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135302 5101 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135310 5101 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135317 5101 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135324 5101 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135332 5101 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135340 5101 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135347 5101 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135368 5101 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135376 5101 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135384 5101 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135392 5101 flags.go:64] FLAG: --kube-reserved=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135399 5101 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135406 5101 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135414 5101 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135450 5101 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135457 5101 flags.go:64] FLAG: --lock-file=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135464 5101 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135472 5101 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135480 5101 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135505 5101 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135512 5101 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135519 5101 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135527 5101 flags.go:64] FLAG:
--logging-format="text" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135534 5101 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135543 5101 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135549 5101 flags.go:64] FLAG: --manifest-url="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135556 5101 flags.go:64] FLAG: --manifest-url-header="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135567 5101 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135575 5101 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135607 5101 flags.go:64] FLAG: --max-pods="110" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135615 5101 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135622 5101 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135628 5101 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135637 5101 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135644 5101 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135651 5101 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135659 5101 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135694 5101 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135701 5101 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135708 5101 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135718 5101 flags.go:64] FLAG: --pod-cidr="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135725 5101 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135739 5101 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135746 5101 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135753 5101 flags.go:64] FLAG: --pods-per-core="0" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135760 5101 flags.go:64] FLAG: --port="10250" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135767 5101 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135774 5101 flags.go:64] FLAG: --provider-id="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135782 5101 flags.go:64] FLAG: --qos-reserved="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135789 5101 flags.go:64] FLAG: --read-only-port="10255" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135796 5101 flags.go:64] FLAG: --register-node="true" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135807 5101 flags.go:64] FLAG: --register-schedulable="true" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135814 5101 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135829 5101 flags.go:64] FLAG: --registry-burst="10" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135836 5101 flags.go:64] FLAG: --registry-qps="5" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 
09:52:08.135842 5101 flags.go:64] FLAG: --reserved-cpus="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135849 5101 flags.go:64] FLAG: --reserved-memory="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135858 5101 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135866 5101 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135874 5101 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135881 5101 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135887 5101 flags.go:64] FLAG: --runonce="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135915 5101 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135925 5101 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135932 5101 flags.go:64] FLAG: --seccomp-default="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135939 5101 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135946 5101 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135954 5101 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135961 5101 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135969 5101 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135976 5101 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.135983 5101 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 09:52:08 crc 
kubenswrapper[5101]: I0122 09:52:08.135990 5101 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136008 5101 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136016 5101 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136023 5101 flags.go:64] FLAG: --system-cgroups="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136030 5101 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136044 5101 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136051 5101 flags.go:64] FLAG: --tls-cert-file="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136058 5101 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136069 5101 flags.go:64] FLAG: --tls-min-version="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136076 5101 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136083 5101 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136095 5101 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136102 5101 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136109 5101 flags.go:64] FLAG: --v="2" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136120 5101 flags.go:64] FLAG: --version="false" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136130 5101 flags.go:64] FLAG: --vmodule="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.136140 5101 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 09:52:08 crc 
kubenswrapper[5101]: I0122 09:52:08.136150 5101 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136367 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136378 5101 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136384 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136391 5101 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136443 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136453 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136463 5101 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136472 5101 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136480 5101 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136487 5101 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136494 5101 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136500 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136507 5101 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136513 5101 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136519 5101 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136525 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136534 5101 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136540 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136546 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136552 5101 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136561 5101 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136567 5101 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136574 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136580 5101 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136588 5101 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136601 5101 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136608 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136615 5101 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136622 5101 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136628 5101 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136634 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136641 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136647 5101 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136654 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136663 5101 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136669 5101 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136679 5101 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136685 5101 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136692 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136698 5101 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136704 5101 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136710 5101 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136716 5101 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136722 5101 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136729 5101 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136735 5101 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136741 5101 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136748 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136755 5101 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136763 5101 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136769 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136775 5101 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136782 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136788 5101 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136794 5101 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136801 5101 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136807 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136818 5101 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136824 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136831 5101 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136837 5101 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136843 5101 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136849 5101 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136856 5101 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136862 5101 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136868 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136874 5101 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136880 5101 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136890 5101 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136896 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136902 5101 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136909 5101 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136915 5101 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136922 5101 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136928 5101 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136935 5101 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136940 5101 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136946 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136953 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136959 5101 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136965 5101 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136971 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136980 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136986 5101 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136992 5101 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.136998 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.137342 5101 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.148765 5101 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.149167 5101 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149260 5101 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149270 5101 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149275 5101 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149279 5101 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149283 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149286 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149289 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149293 5101 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149297 5101 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149300 5101 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149303 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149307 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149310 5101 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149314 5101 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149317 5101 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149320 5101 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149323 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149327 5101 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149330 5101 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149333 5101 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149337 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149341 5101 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149344 5101 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149348 5101 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149351 5101 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149356 5101 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149359 5101 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149363 5101 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149366 5101 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149369 5101 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149372 5101 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149376 5101 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149380 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149383 5101 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149387 5101 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149390 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149393 5101 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149397 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149402 5101 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149405 5101 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149408 5101 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149411 5101 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149435 5101 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149438 5101 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149441 5101 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149445 5101 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149448 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149451 5101 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149455 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149458 5101 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149461 5101 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149464 5101 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149468 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149471 5101 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149474 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149478 5101 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149481 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149484 5101 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149489 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149494 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149498 5101 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149502 5101 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149506 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149511 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149515 5101 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149521 5101 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149527 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149531 5101 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149535 5101 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149539 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149545 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149549 5101 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149553 5101 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149557 5101 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149560 5101 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149564 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149568 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149572 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149576 5101 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149580 5101 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149584 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149588 5101 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149593 5101 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149598 5101 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149602 5101 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149605 5101 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.149611 5101 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149749 5101 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149757 5101 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149761 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149765 5101 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149770 5101 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149773 5101 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149777 5101 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149780 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149784 5101 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149789 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149794 5101 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149798 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149802 5101 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149806 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149810 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149814 5101 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149818 5101 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149823 5101 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149826 5101 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149829 5101 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149833 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149836 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149840 5101 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149844 5101 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149847 5101 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149851 5101 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149855 5101 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149858 5101 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149863 5101 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149867 5101 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149871 5101 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149875 5101 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149879 5101 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149883 5101 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149887 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149891 5101 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149896 5101 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149902 5101 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149906 5101 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149910 5101 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149915 5101 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149919 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149922 5101 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149926 5101 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149931 5101 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149934 5101 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149938 5101 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149942 5101 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149946 5101 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149950 5101 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149954 5101 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149958 5101 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149962 5101 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149965 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149969 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149974 5101 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149978 5101 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149981 5101 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149985 5101 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149989 5101 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149994 5101 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.149999 5101 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150005 5101 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150009 5101 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150013 5101 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150018 5101 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150022 5101 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150026 5101 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150030 5101 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150034 5101 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150045 5101 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150049 5101 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150053 5101 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150057 5101 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150061 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150066 5101 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150070 5101 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150074 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150078 5101 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150083 5101 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150087 5101 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150092 5101 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150096 5101 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150100 5101 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150104 5101 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:52:08 crc kubenswrapper[5101]: W0122 09:52:08.150108 5101 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.150114 5101 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.150533 5101 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.153087 5101 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.156402 5101 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.156576 5101 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.157086 5101 server.go:1019] "Starting client certificate rotation"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.157317 5101 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.157445 5101 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.163128 5101 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.165130 5101 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.169228 5101 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.176175 5101 log.go:25] "Validated CRI v1 runtime API"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.196196 5101 log.go:25] "Validated CRI v1 image API"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.197822 5101 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.200127 5101 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-22-09-45-55-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.200165 5101 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.216628 5101 manager.go:217] Machine: {Timestamp:2026-01-22 09:52:08.215216505 +0000 UTC m=+0.658846792 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7 BootID:ae597ad9-e613-4a07-817c-9064cdd0d814 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:28:9e Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:28:9e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:eb:d8:bf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cf:7c:9a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8b:22:d3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ab:19:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:9e:38:d6:bf:69 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:55:18:fe:a8:ef Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.216917 5101 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.217221 5101 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.218608 5101 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.218664 5101 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.218974 5101 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.218986 5101 container_manager_linux.go:306] "Creating device plugin manager"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.219021 5101 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.219056 5101 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.219529 5101 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.219689 5101 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.220317 5101 kubelet.go:491] "Attempting to sync node with API server"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.220358 5101 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.220404 5101 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.220437 5101 kubelet.go:397] "Adding apiserver pod source"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.220468 5101 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.223295 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.223861 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.334918 5101 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.335218 5101 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.336386 5101 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.336482 5101 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.410529 5101 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.410904 5101 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411388 5101 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411790 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411814 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411824 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411833 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411841 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411847 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411859 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411868 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411879 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411892 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.411902 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.412086 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.412409 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.412433 5101 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.414066 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.422586 5101 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.422688 5101 server.go:1295] "Started kubelet"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.423011 5101 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.423182 5101 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.423817 5101 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.423897 5101 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 09:52:08 crc systemd[1]: Started Kubernetes Kubelet.
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.425390 5101 server.go:317] "Adding debug handlers to kubelet server" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.426229 5101 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.427232 5101 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.427546 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.427980 5101 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.428374 5101 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.428407 5101 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.428538 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.428722 5101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d04d1f477b4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC 
m=+0.866264948,LastTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC m=+0.866264948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.429782 5101 factory.go:55] Registering systemd factory Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.429874 5101 factory.go:223] Registration of the systemd container factory successfully Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.429959 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.433413 5101 factory.go:153] Registering CRI-O factory Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.433455 5101 factory.go:223] Registration of the crio container factory successfully Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.433533 5101 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.433583 5101 factory.go:103] Registering Raw factory Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.433610 5101 manager.go:1196] Started watching for new ooms in manager Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.435257 5101 manager.go:319] Starting recovery of all containers Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.470929 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471036 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471048 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471056 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471063 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471071 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471080 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471088 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471098 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471106 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471113 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471127 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471135 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" 
seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471143 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471168 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471191 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471212 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471220 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471227 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 
09:52:08.471235 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471243 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471251 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471282 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471295 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471316 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471324 5101 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471346 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471355 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471397 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471406 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471413 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471453 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471474 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471484 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471492 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471501 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471509 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471517 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 22 
09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471525 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471533 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471557 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471566 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471574 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471594 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471602 
5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471610 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471632 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471641 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471672 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471681 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471689 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471698 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471706 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471714 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471721 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471729 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471756 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471764 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471782 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471790 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471829 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471841 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471851 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471858 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471881 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471889 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471897 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471905 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471913 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471921 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471929 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471937 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471975 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471983 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471991 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.471998 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472005 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472013 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472022 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472029 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472054 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472090 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472099 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472107 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472115 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472124 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472132 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472140 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472160 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472168 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472176 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472184 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472201 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472208 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472216 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472223 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472241 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472248 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472281 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472289 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472297 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472308 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472328 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472335 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472354 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472362 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472370 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472377 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472386 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472394 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472403 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472412 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472472 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472481 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472515 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472523 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472531 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472539 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472547 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472555 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472571 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472579 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472586 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472594 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472610 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472620 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472635 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472642 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472659 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472667 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472674 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472682 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472689 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472696 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472704 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472712 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472746 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472754 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472762 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472770 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472777 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472786 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472820 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472828 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.472837 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475187 5101 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475242 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475258 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475270 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475281 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475291 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475303 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475313 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475324 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475334 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475362 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475374 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475384 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475396 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475407 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475432 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475442 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475451 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475462 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475472 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475482 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475495 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475505 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475516 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475525 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475537 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475548 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475559 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475568 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475577 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475588 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475599 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475609 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475618 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475628 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475636 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475646 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475656 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475665 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475674 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475683 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475693 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475704 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475714 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475722 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475732 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475742 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475752 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475761 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475771 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475780 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475789 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475798 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475809 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475819 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475827 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475836 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475844 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475853 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475862 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475871 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475880 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475889 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475899 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475908 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475916 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475926 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475934 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475950 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475959 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475967 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475976 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475984 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.475992 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476001 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476009 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476017 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" 
seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476028 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476037 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476047 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476112 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476124 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476133 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476142 5101 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476152 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476160 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476168 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476177 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476186 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476195 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476203 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476212 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476220 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476228 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476236 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476245 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476254 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476262 5101 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476271 5101 reconstruct.go:97] "Volume reconstruction finished" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.476283 5101 reconciler.go:26] "Reconciler: start to sync state" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.481931 5101 manager.go:324] Recovery completed Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.493231 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.494725 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.494775 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.494785 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.495739 5101 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.495759 5101 cpu_manager.go:223] "Reconciling" 
reconcilePeriod="10s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.495789 5101 state_mem.go:36] "Initialized new in-memory state store" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.509828 5101 policy_none.go:49] "None policy: Start" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.509886 5101 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.509909 5101 state_mem.go:35] "Initializing new in-memory state store" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.524377 5101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.527095 5101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.527167 5101 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.527204 5101 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.527234 5101 kubelet.go:2451] "Starting kubelet main sync loop" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.527299 5101 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.527631 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.529770 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.546802 5101 manager.go:341] "Starting Device Plugin manager" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.546955 5101 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.546973 5101 server.go:85] "Starting device plugin registration server" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.547573 5101 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.547603 5101 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.548035 5101 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.548113 5101 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 09:52:08 
crc kubenswrapper[5101]: I0122 09:52:08.548124 5101 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.556838 5101 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.556964 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.628336 5101 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.628589 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.629353 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.629384 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.629395 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.629716 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630015 5101 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630338 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630386 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630532 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630563 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.630573 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631057 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631180 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631217 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631183 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631294 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631304 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631438 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631463 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631473 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631599 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631622 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.631631 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632011 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc 
kubenswrapper[5101]: I0122 09:52:08.632160 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632202 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632408 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632440 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632451 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632880 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632904 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.632913 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633029 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633103 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633166 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633395 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633428 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633439 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633815 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633837 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.633846 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.634171 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.634201 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.634612 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.634634 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.634642 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.648687 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.649768 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.649815 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.649828 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.649857 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.650438 5101 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.662594 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.667292 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.689360 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.705709 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.711993 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780554 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780617 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780642 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780661 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780688 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.780786 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781075 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781128 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781165 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781343 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781470 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781524 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781554 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781590 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781615 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781671 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781804 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781815 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.781931 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782017 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782021 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782200 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782190 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782309 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782357 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782387 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782688 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782705 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782750 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.782957 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.850896 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.852468 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.852539 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.852553 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.852589 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.853219 5101 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883412 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883519 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883542 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883562 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883590 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883622 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883642 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883645 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883729 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883675 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883785 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883789 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883808 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883820 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883826 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883849 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883863 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883878 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883906 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883929 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883885 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883968 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.883973 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884017 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884040 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884036 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884070 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884097 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884119 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884138 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884160 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.884017 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: E0122 09:52:08.950588 5101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d04d1f477b4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC m=+0.866264948,LastTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC m=+0.866264948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.964118 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.968076 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:08 crc kubenswrapper[5101]: I0122 09:52:08.991042 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:52:09 crc kubenswrapper[5101]: W0122 09:52:09.005732 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-6b57ba418a8ea6c28d599f4232ee95eb10b0013a4e52923d730109d9d66dbca9 WatchSource:0}: Error finding container 6b57ba418a8ea6c28d599f4232ee95eb10b0013a4e52923d730109d9d66dbca9: Status 404 returned error can't find the container with id 6b57ba418a8ea6c28d599f4232ee95eb10b0013a4e52923d730109d9d66dbca9
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.006644 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.009790 5101 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.013446 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:09 crc kubenswrapper[5101]: W0122 09:52:09.022372 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-61731bc5a00916334188c2f78e2a4f1bd601629e606a47e43a39b85ab6715efd WatchSource:0}: Error finding container 61731bc5a00916334188c2f78e2a4f1bd601629e606a47e43a39b85ab6715efd: Status 404 returned error can't find the container with id 61731bc5a00916334188c2f78e2a4f1bd601629e606a47e43a39b85ab6715efd
Jan 22 09:52:09 crc kubenswrapper[5101]: W0122 09:52:09.030286 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-259191c8e61e3244acf3d67e258375bf3613b6db8d4e38553d8564a8ec9103a6 WatchSource:0}: Error finding container 259191c8e61e3244acf3d67e258375bf3613b6db8d4e38553d8564a8ec9103a6: Status 404 returned error can't find the container with id 259191c8e61e3244acf3d67e258375bf3613b6db8d4e38553d8564a8ec9103a6
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.030515 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms"
Jan 22 09:52:09 crc kubenswrapper[5101]: W0122 09:52:09.030884 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-ba5d59b8c6c21f1213681cca825e6defcb14c20408bf261ea5442fd4758df9e3 WatchSource:0}: Error finding container ba5d59b8c6c21f1213681cca825e6defcb14c20408bf261ea5442fd4758df9e3: Status 404 returned error can't find the container with id ba5d59b8c6c21f1213681cca825e6defcb14c20408bf261ea5442fd4758df9e3
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.179948 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.253500 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.254391 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.254460 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.254475 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.254503 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.254960 5101 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.411970 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.415566 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.542959 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ba5d59b8c6c21f1213681cca825e6defcb14c20408bf261ea5442fd4758df9e3"}
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.549039 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"259191c8e61e3244acf3d67e258375bf3613b6db8d4e38553d8564a8ec9103a6"}
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.551222 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"61731bc5a00916334188c2f78e2a4f1bd601629e606a47e43a39b85ab6715efd"}
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.553029 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5f135c4223b6005e0b98209930f60b72b5624750d98690e990dc496e4f08f495"}
Jan 22 09:52:09 crc kubenswrapper[5101]: I0122 09:52:09.554186 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6b57ba418a8ea6c28d599f4232ee95eb10b0013a4e52923d730109d9d66dbca9"}
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.785289 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.832226 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s"
Jan 22 09:52:09 crc kubenswrapper[5101]: E0122 09:52:09.973362 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.055354 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.058883 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.058953 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.058973 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.059004 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.059772 5101 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.350157 5101 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.352838 5101 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.417025 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.591891 5101 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="89f4830ad58f953495c2fb4ac1e08c64d2fe2fd2607a324b0523b1ba4890d434" exitCode=0
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.592008 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"89f4830ad58f953495c2fb4ac1e08c64d2fe2fd2607a324b0523b1ba4890d434"}
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.592100 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.594035 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:10 crc kubenswrapper[5101]: I0122
09:52:10.594089 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.594108 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.594361 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.606564 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"86f3894614da3147910d65833f0c6bc7534abcecacc86ae6b4ab118351aa6206"} Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.606634 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9"} Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.609104 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326" exitCode=0 Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.609168 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326"} Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.609458 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.610559 5101 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.610611 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.610626 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.610966 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.611333 5101 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303" exitCode=0 Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.611441 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303"} Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.611721 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.612710 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.612748 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.612760 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.612766 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.612993 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.614032 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.614063 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.614075 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.614823 5101 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef" exitCode=0 Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.614858 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef"} Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.615050 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.615981 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.616016 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:10 crc kubenswrapper[5101]: I0122 09:52:10.616030 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 09:52:10 crc kubenswrapper[5101]: E0122 09:52:10.616190 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.415403 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.433236 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.633212 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"333d46208c759a5c89c03961c535ca7fbac296abd5bcf55e6438be51c57d2418"} Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.633284 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.636739 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.636785 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.636800 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.637047 5101 kubelet.go:3336] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.641631 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b5537fc13540bda46d5a4c4b17f797bc83183247a32e70554b1635d1b974da1f"} Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.641697 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2b16d75a0b02329061065edfa62b83d1a4f07d842a78692e1f9d2132cc368d3d"} Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.644897 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5"} Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.650749 5101 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca" exitCode=0 Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.650808 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca"} Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.651026 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.651611 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 
09:52:11.651636 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.651646 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.651860 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.654199 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.660645 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.661533 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.661580 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.661600 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:11 crc kubenswrapper[5101]: I0122 09:52:11.661633 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.662132 5101 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 
22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.757705 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 09:52:11 crc kubenswrapper[5101]: E0122 09:52:11.856026 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.414888 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.655661 5101 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324" exitCode=0 Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.655733 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.655845 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.656512 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.656543 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.656558 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:12 crc kubenswrapper[5101]: E0122 09:52:12.656807 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.657855 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2bed479d7ddba528ba362dfacf9a4c937cdaa5c6c47a0a85e257cdf288c8d832"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.659562 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.663690 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e94d99614d5483322fcd7d70509229b78e2636a87a3d822ede4e7731750d6ca5"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.663744 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"11c301b0019cc0ce818e211b588e3719290a84356f16119fa171487af85d21ea"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.663843 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.664525 5101 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.664557 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.664568 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:12 crc kubenswrapper[5101]: E0122 09:52:12.664787 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.666537 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.666579 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21"} Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.666678 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.667190 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.667273 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.667364 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:12 crc kubenswrapper[5101]: E0122 
09:52:12.667786 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.691179 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.691247 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.691261 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:12 crc kubenswrapper[5101]: E0122 09:52:12.691579 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.840208 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:12 crc kubenswrapper[5101]: I0122 09:52:12.852487 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.671748 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"923f3aad7922b02a55ac48193799b2470a8c483f127b466ecc45551f9735cb9b"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.671814 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.671895 5101 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.672640 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.672683 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.672696 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:13 crc kubenswrapper[5101]: E0122 09:52:13.672953 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676640 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7704a25ab383ef9175497e0e37c2a841354084929391ce2b90efd2dd35aabf91"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676688 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"16a2b2622abd10aa45112a1ee93f3e60e153620424b6d9c047c6f8b2eaf54120"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676747 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"71296c0273ec3f2a6ae3beabb36bbaee0f0b2dc3917cbd4527cb3350a48f471d"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676762 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"fc177986df81bd5ef6c6d984c1b802703e93e08d161ac812ff2ecfe1ab8c25a1"} Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676810 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676872 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.676819 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677575 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677612 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677627 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677768 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677815 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:13 crc kubenswrapper[5101]: E0122 09:52:13.677937 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.677948 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:13 crc kubenswrapper[5101]: E0122 09:52:13.679032 5101 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:13 crc kubenswrapper[5101]: I0122 09:52:13.832004 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.603295 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.672112 5101 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683551 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f9fb34bb1e8b777fcb9f3bd8727c917e980b2e42d41c032a6bcb123864464c66"} Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683649 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683653 5101 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683677 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683712 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.683739 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.684238 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:14 crc 
kubenswrapper[5101]: I0122 09:52:14.684271 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.684282 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:14 crc kubenswrapper[5101]: E0122 09:52:14.684503 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685100 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685132 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685142 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685221 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685250 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685265 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:14 crc kubenswrapper[5101]: E0122 09:52:14.685393 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: E0122 09:52:14.685774 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685840 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685869 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.685880 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:14 crc kubenswrapper[5101]: E0122 09:52:14.686205 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.862485 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.863477 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.863525 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.863537 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.863561 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.885300 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:14 crc kubenswrapper[5101]: I0122 09:52:14.894514 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.686774 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.686947 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.686985 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688162 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688202 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688206 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688249 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688268 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688213 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688712 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688750 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:15 crc kubenswrapper[5101]: I0122 09:52:15.688765 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:15 crc kubenswrapper[5101]: E0122 09:52:15.688848 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:15 crc kubenswrapper[5101]: E0122 09:52:15.689087 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:15 crc kubenswrapper[5101]: E0122 09:52:15.689335 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.689863 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.689914 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690709 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690751 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690774 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690773 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690812 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:16 crc kubenswrapper[5101]: I0122 09:52:16.690826 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:16 crc kubenswrapper[5101]: E0122 09:52:16.691684 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:16 crc kubenswrapper[5101]: E0122 09:52:16.693825 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:17 crc kubenswrapper[5101]: I0122 09:52:17.852632 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:17 crc kubenswrapper[5101]: I0122 09:52:17.852927 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:17 crc kubenswrapper[5101]: I0122 09:52:17.853823 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:17 crc kubenswrapper[5101]: I0122 09:52:17.853868 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:17 crc kubenswrapper[5101]: I0122 09:52:17.853877 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:17 crc kubenswrapper[5101]: E0122 09:52:17.854231 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.008653 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.490660 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.490963 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.491988 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.492038 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.492048 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:18 crc kubenswrapper[5101]: E0122 09:52:18.492401 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:18 crc kubenswrapper[5101]: E0122 09:52:18.557207 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.695410 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.696547 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.696628 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:18 crc kubenswrapper[5101]: I0122 09:52:18.696654 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:18 crc kubenswrapper[5101]: E0122 09:52:18.697490 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.372247 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.372586 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.373902 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.373966 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.373981 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:22 crc kubenswrapper[5101]: E0122 09:52:22.374525 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.377406 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.740932 5101 trace.go:236] Trace[1306488227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:52:12.738) (total time: 10001ms):
Jan 22 09:52:22 crc kubenswrapper[5101]: Trace[1306488227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:52:22.740)
Jan 22 09:52:22 crc kubenswrapper[5101]: Trace[1306488227]: [10.00187287s] [10.00187287s] END
Jan 22 09:52:22 crc kubenswrapper[5101]: E0122 09:52:22.741020 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.764453 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.765386 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.765470 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:22 crc kubenswrapper[5101]: I0122 09:52:22.765485 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:22 crc kubenswrapper[5101]: E0122 09:52:22.765998 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:23 crc kubenswrapper[5101]: I0122 09:52:23.416021 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 22 09:52:24 crc kubenswrapper[5101]: I0122 09:52:24.441332 5101 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 09:52:24 crc kubenswrapper[5101]: I0122 09:52:24.441457 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 22 09:52:24 crc kubenswrapper[5101]: I0122 09:52:24.456179 5101 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 09:52:24 crc kubenswrapper[5101]: I0122 09:52:24.456340 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 22 09:52:24 crc kubenswrapper[5101]: E0122 09:52:24.693135 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 22 09:52:25 crc kubenswrapper[5101]: I0122 09:52:25.373084 5101 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 09:52:25 crc kubenswrapper[5101]: I0122 09:52:25.373180 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:52:27 crc kubenswrapper[5101]: E0122 09:52:27.407149 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.090101 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.090718 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.092025 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.092115 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.092128 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:28 crc kubenswrapper[5101]: E0122 09:52:28.092549 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.106713 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.498002 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.498483 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.499856 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.499918 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.499937 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:28 crc kubenswrapper[5101]: E0122 09:52:28.500518 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.503836 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: E0122 09:52:28.557469 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.785051 5101 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.785110 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.785297 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786018 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786057 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786071 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786341 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:28 crc kubenswrapper[5101]: E0122 09:52:28.786391 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786410 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:28 crc kubenswrapper[5101]: I0122 09:52:28.786448 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:28 crc kubenswrapper[5101]: E0122 09:52:28.786989 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.443841 5101 trace.go:236] Trace[1489558767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:52:15.272) (total time: 14171ms):
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[1489558767]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14171ms (09:52:29.443)
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[1489558767]: [14.171190158s] [14.171190158s] END
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.443898 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.443923 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.444018 5101 trace.go:236] Trace[530662767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:52:15.671) (total time: 13772ms):
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[530662767]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13772ms (09:52:29.443)
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[530662767]: [13.772812007s] [13.772812007s] END
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.443946 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f477b4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC m=+0.866264948,LastTimestamp:2026-01-22 09:52:08.422634681 +0000 UTC m=+0.866264948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.444083 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.444013 5101 trace.go:236] Trace[706671686]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:52:16.707) (total time: 12735ms):
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[706671686]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12735ms (09:52:29.443)
Jan 22 09:52:29 crc kubenswrapper[5101]: Trace[706671686]: [12.735979419s] [12.735979419s] END
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.444126 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.445984 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.446873 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.448967 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.454605 5101 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.459031 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.463987 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1fc649655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.555599445 +0000 UTC m=+0.999229712,LastTimestamp:2026-01-22 09:52:08.555599445 +0000 UTC m=+0.999229712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.476645 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.629372219 +0000 UTC m=+1.073002486,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.481291 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.629389339 +0000 UTC m=+1.073019606,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.487152 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.629400509 +0000 UTC m=+1.073030776,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.493328 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.630551422 +0000 UTC m=+1.074181689,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.498228 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.630568842 +0000 UTC m=+1.074199109,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.503379 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.630577923 +0000 UTC m=+1.074208190,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.508293 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.631282791 +0000 UTC m=+1.074913058,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.514030 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.631300221 +0000 UTC m=+1.074930488,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.525724 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.631308351 +0000 UTC m=+1.074938618,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.530350 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.631453042 +0000 UTC m=+1.075083299,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.535594 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.631467583 +0000 UTC m=+1.075097850,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.540290 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC 
m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.631477893 +0000 UTC m=+1.075108160,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.546587 5101 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44020->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.546627 5101 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44012->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.546654 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44020->192.168.126.11:17697: read: connection reset by peer" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.546665 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44012->192.168.126.11:17697: read: connection reset by peer" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.546594 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.631612204 +0000 UTC m=+1.075242471,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.550729 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.631627124 +0000 UTC m=+1.075257391,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.551617 5101 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:44026->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.551681 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44026->192.168.126.11:17697: read: connection reset by peer" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.555923 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 09:52:08.631635875 +0000 UTC m=+1.075266142,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.560402 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.632428674 +0000 UTC m=+1.076058941,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.565804 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.632445294 +0000 UTC m=+1.076075561,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.570099 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c4b54f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c4b54f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494789967 +0000 UTC m=+0.938420234,LastTimestamp:2026-01-22 
09:52:08.632455334 +0000 UTC m=+1.076085601,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.574025 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c43b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c43b28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494758696 +0000 UTC m=+0.938388963,LastTimestamp:2026-01-22 09:52:08.632893099 +0000 UTC m=+1.076523366,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.578746 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d04d1f8c49119\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d04d1f8c49119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:08.494780697 +0000 UTC m=+0.938410964,LastTimestamp:2026-01-22 09:52:08.632908949 +0000 UTC m=+1.076539206,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.593283 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2177ccbb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.010170807 +0000 UTC m=+1.453801074,LastTimestamp:2026-01-22 09:52:09.010170807 +0000 UTC m=+1.453801074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.597933 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2177dfc78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.010248824 +0000 UTC m=+1.453879091,LastTimestamp:2026-01-22 09:52:09.010248824 +0000 UTC m=+1.453879091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.603866 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d218581df4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.024544244 +0000 UTC m=+1.468174511,LastTimestamp:2026-01-22 09:52:09.024544244 +0000 UTC m=+1.468174511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.608776 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d218cef501 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.032332545 +0000 UTC m=+1.475962812,LastTimestamp:2026-01-22 09:52:09.032332545 +0000 UTC m=+1.475962812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.613393 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d218d74417 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.032877079 +0000 UTC m=+1.476507346,LastTimestamp:2026-01-22 09:52:09.032877079 +0000 UTC m=+1.476507346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.618927 5101 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d244301d8a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.760120202 +0000 UTC m=+2.203750469,LastTimestamp:2026-01-22 09:52:09.760120202 +0000 UTC m=+2.203750469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.624112 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d24438c915 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.760688405 +0000 UTC m=+2.204318672,LastTimestamp:2026-01-22 09:52:09.760688405 +0000 UTC m=+2.204318672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.628252 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d244391537 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.760707895 +0000 UTC m=+2.204338162,LastTimestamp:2026-01-22 09:52:09.760707895 +0000 UTC m=+2.204338162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.632900 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d244399146 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.760739654 +0000 UTC m=+2.204369951,LastTimestamp:2026-01-22 09:52:09.760739654 +0000 UTC 
m=+2.204369951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.638253 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2443b3d13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.760849171 +0000 UTC m=+2.204479438,LastTimestamp:2026-01-22 09:52:09.760849171 +0000 UTC m=+2.204479438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.643094 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d244e194c8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.7717506 
+0000 UTC m=+2.215380867,LastTimestamp:2026-01-22 09:52:09.7717506 +0000 UTC m=+2.215380867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.647580 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d244f263dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.772852188 +0000 UTC m=+2.216482455,LastTimestamp:2026-01-22 09:52:09.772852188 +0000 UTC m=+2.216482455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.652019 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d244f3cc47 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.772944455 +0000 UTC 
m=+2.216574722,LastTimestamp:2026-01-22 09:52:09.772944455 +0000 UTC m=+2.216574722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.656352 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d24512ffb3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.774989235 +0000 UTC m=+2.218619502,LastTimestamp:2026-01-22 09:52:09.774989235 +0000 UTC m=+2.218619502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.662712 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2453f1332 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.77787781 +0000 UTC m=+2.221508077,LastTimestamp:2026-01-22 09:52:09.77787781 +0000 UTC m=+2.221508077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.667626 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d245b5558a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:09.785628042 +0000 UTC m=+2.229258309,LastTimestamp:2026-01-22 09:52:09.785628042 +0000 UTC m=+2.229258309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.671990 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2671f9e6d 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.346241645 +0000 UTC m=+2.789871912,LastTimestamp:2026-01-22 09:52:10.346241645 +0000 UTC m=+2.789871912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.676582 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d268133a4c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.362206796 +0000 UTC m=+2.805837083,LastTimestamp:2026-01-22 09:52:10.362206796 +0000 UTC m=+2.805837083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.680795 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2682a51c3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.363720131 +0000 UTC m=+2.807350418,LastTimestamp:2026-01-22 09:52:10.363720131 +0000 UTC m=+2.807350418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.686012 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2766abfc4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.60282362 +0000 UTC 
m=+3.046453887,LastTimestamp:2026-01-22 09:52:10.60282362 +0000 UTC m=+3.046453887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.691010 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d277005a3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.612628031 +0000 UTC m=+3.056258298,LastTimestamp:2026-01-22 09:52:10.612628031 +0000 UTC m=+3.056258298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.696110 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2771a4194 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.614325652 +0000 UTC m=+3.057955918,LastTimestamp:2026-01-22 09:52:10.614325652 +0000 UTC m=+3.057955918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.700834 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d2775b4634 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:10.618586676 +0000 UTC m=+3.062216943,LastTimestamp:2026-01-22 09:52:10.618586676 +0000 UTC m=+3.062216943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.705541 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d297b54399 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.161355161 +0000 UTC m=+3.604985428,LastTimestamp:2026-01-22 09:52:11.161355161 +0000 UTC m=+3.604985428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.710343 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d297b70137 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.161469239 +0000 UTC m=+3.605099506,LastTimestamp:2026-01-22 09:52:11.161469239 +0000 UTC m=+3.605099506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.714243 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d297b7d0ac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.161522348 +0000 UTC m=+3.605152615,LastTimestamp:2026-01-22 09:52:11.161522348 +0000 UTC m=+3.605152615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.718204 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d298508c03 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.171531779 +0000 UTC m=+3.615162036,LastTimestamp:2026-01-22 09:52:11.171531779 +0000 UTC m=+3.615162036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.723196 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d29863017b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.172741499 +0000 UTC m=+3.616371756,LastTimestamp:2026-01-22 09:52:11.172741499 +0000 UTC m=+3.616371756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.727791 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d299096e1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.183648285 +0000 UTC m=+3.627278552,LastTimestamp:2026-01-22 09:52:11.183648285 +0000 UTC m=+3.627278552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: 
E0122 09:52:29.731767 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d29912151d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.184215325 +0000 UTC m=+3.627845592,LastTimestamp:2026-01-22 09:52:11.184215325 +0000 UTC m=+3.627845592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.739122 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d29924381f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.185403935 +0000 UTC m=+3.629034202,LastTimestamp:2026-01-22 09:52:11.185403935 +0000 UTC 
m=+3.629034202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.743383 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d29b5bdc00 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.2226048 +0000 UTC m=+3.666235067,LastTimestamp:2026-01-22 09:52:11.2226048 +0000 UTC m=+3.666235067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.748274 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d04d2a133ba9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.320638111 +0000 UTC m=+3.764268378,LastTimestamp:2026-01-22 09:52:11.320638111 +0000 UTC m=+3.764268378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.752776 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2ade1d6ce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.533375182 +0000 UTC m=+3.977005459,LastTimestamp:2026-01-22 09:52:11.533375182 +0000 UTC m=+3.977005459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.754646 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2af90f121 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.561627937 +0000 UTC m=+4.005258204,LastTimestamp:2026-01-22 09:52:11.561627937 +0000 UTC m=+4.005258204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.757682 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2afc3d077 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.564961911 +0000 UTC m=+4.008592178,LastTimestamp:2026-01-22 09:52:11.564961911 +0000 UTC m=+4.008592178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.760503 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2b5008f18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.652828952 +0000 UTC m=+4.096459219,LastTimestamp:2026-01-22 09:52:11.652828952 +0000 UTC m=+4.096459219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.762506 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2b516d922 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.654289698 +0000 UTC m=+4.097919965,LastTimestamp:2026-01-22 09:52:11.654289698 +0000 UTC m=+4.097919965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc 
kubenswrapper[5101]: E0122 09:52:29.766204 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2b52f9357 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.655910231 +0000 UTC m=+4.099540498,LastTimestamp:2026-01-22 09:52:11.655910231 +0000 UTC m=+4.099540498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.771337 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2b637d037 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.673227319 +0000 UTC m=+4.116857586,LastTimestamp:2026-01-22 
09:52:11.673227319 +0000 UTC m=+4.116857586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.775677 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2b677a793 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.677411219 +0000 UTC m=+4.121041486,LastTimestamp:2026-01-22 09:52:11.677411219 +0000 UTC m=+4.121041486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.781109 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2b7cdee66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.699842662 +0000 UTC m=+4.143472929,LastTimestamp:2026-01-22 09:52:11.699842662 +0000 UTC m=+4.143472929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.787009 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2b7e3a3c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.701265348 +0000 UTC m=+4.144895615,LastTimestamp:2026-01-22 09:52:11.701265348 +0000 UTC m=+4.144895615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.788557 5101 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.790618 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="923f3aad7922b02a55ac48193799b2470a8c483f127b466ecc45551f9735cb9b" exitCode=255 Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.790668 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"923f3aad7922b02a55ac48193799b2470a8c483f127b466ecc45551f9735cb9b"} Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.790910 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.791500 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.791533 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.791542 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.792017 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:29 crc kubenswrapper[5101]: I0122 09:52:29.792316 5101 scope.go:117] "RemoveContainer" containerID="923f3aad7922b02a55ac48193799b2470a8c483f127b466ecc45551f9735cb9b" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.795014 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2c3c4acad openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.900562605 +0000 UTC m=+4.344192872,LastTimestamp:2026-01-22 09:52:11.900562605 +0000 UTC m=+4.344192872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.800349 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d04d2c52c781c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.924142108 +0000 UTC m=+4.367772385,LastTimestamp:2026-01-22 09:52:11.924142108 +0000 UTC m=+4.367772385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.805000 5101 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2c75b5c38 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.960769592 +0000 UTC m=+4.404399859,LastTimestamp:2026-01-22 09:52:11.960769592 +0000 UTC m=+4.404399859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.811113 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2c879e4a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:11.979547816 +0000 UTC m=+4.423178083,LastTimestamp:2026-01-22 09:52:11.979547816 +0000 UTC m=+4.423178083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.815613 5101 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2ce2dbb7d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.075219837 +0000 UTC m=+4.518850104,LastTimestamp:2026-01-22 09:52:12.075219837 +0000 UTC m=+4.518850104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.819979 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d2cef3415b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.088164699 +0000 UTC m=+4.531794966,LastTimestamp:2026-01-22 09:52:12.088164699 +0000 UTC 
m=+4.531794966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.825464 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2cffcb4c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.105561286 +0000 UTC m=+4.549191553,LastTimestamp:2026-01-22 09:52:12.105561286 +0000 UTC m=+4.549191553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.830451 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2d1464ce5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.127161573 +0000 UTC m=+4.570791840,LastTimestamp:2026-01-22 09:52:12.127161573 +0000 UTC m=+4.570791840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.835376 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2d1581192 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.128326034 +0000 UTC m=+4.571956321,LastTimestamp:2026-01-22 09:52:12.128326034 +0000 UTC m=+4.571956321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.841085 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2f1035454 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.659643476 +0000 UTC m=+5.103273743,LastTimestamp:2026-01-22 09:52:12.659643476 +0000 UTC m=+5.103273743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.846869 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2f3749f9a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.700622746 +0000 UTC m=+5.144253013,LastTimestamp:2026-01-22 09:52:12.700622746 +0000 UTC m=+5.144253013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.851886 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2f4ce01cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.723257805 +0000 UTC m=+5.166888072,LastTimestamp:2026-01-22 09:52:12.723257805 +0000 UTC m=+5.166888072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.858401 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2f4e1a1e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.724543974 +0000 UTC m=+5.168174241,LastTimestamp:2026-01-22 09:52:12.724543974 +0000 UTC m=+5.168174241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc 
kubenswrapper[5101]: E0122 09:52:29.862627 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2fe5add88 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.88348404 +0000 UTC m=+5.327114307,LastTimestamp:2026-01-22 09:52:12.88348404 +0000 UTC m=+5.327114307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.866492 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2ff0ff09d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.895350941 +0000 UTC m=+5.338981208,LastTimestamp:2026-01-22 09:52:12.895350941 +0000 UTC m=+5.338981208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.871275 5101 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d2ff20ed86 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.896464262 +0000 UTC m=+5.340094529,LastTimestamp:2026-01-22 09:52:12.896464262 +0000 UTC m=+5.340094529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.876370 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d3001351cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.912349645 +0000 UTC m=+5.355979912,LastTimestamp:2026-01-22 09:52:12.912349645 +0000 UTC m=+5.355979912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.880809 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d300958c89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.920884361 +0000 UTC m=+5.364514628,LastTimestamp:2026-01-22 09:52:12.920884361 +0000 UTC m=+5.364514628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.885691 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d30c8b0384 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.121520516 +0000 UTC m=+5.565150783,LastTimestamp:2026-01-22 09:52:13.121520516 +0000 UTC m=+5.565150783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.889893 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d30d442194 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.133652372 +0000 UTC m=+5.577282649,LastTimestamp:2026-01-22 09:52:13.133652372 +0000 UTC m=+5.577282649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.895081 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d30d5432d4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.134705364 +0000 UTC m=+5.578335621,LastTimestamp:2026-01-22 09:52:13.134705364 +0000 UTC 
m=+5.578335621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.900011 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d3192b0479 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.333333113 +0000 UTC m=+5.776963400,LastTimestamp:2026-01-22 09:52:13.333333113 +0000 UTC m=+5.776963400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.904157 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d31a132d9d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.348547997 +0000 UTC m=+5.792178264,LastTimestamp:2026-01-22 09:52:13.348547997 +0000 UTC m=+5.792178264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.914223 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d31a235085 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.349605509 +0000 UTC m=+5.793235776,LastTimestamp:2026-01-22 09:52:13.349605509 +0000 UTC m=+5.793235776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.917961 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d327496abe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.570206398 +0000 UTC m=+6.013836655,LastTimestamp:2026-01-22 09:52:13.570206398 +0000 UTC 
m=+6.013836655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.934755 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d32811fb95 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.583350677 +0000 UTC m=+6.026980944,LastTimestamp:2026-01-22 09:52:13.583350677 +0000 UTC m=+6.026980944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.949371 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d32822444d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.584417869 +0000 UTC 
m=+6.028048146,LastTimestamp:2026-01-22 09:52:13.584417869 +0000 UTC m=+6.028048146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.955392 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d333470eda openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.771378394 +0000 UTC m=+6.215008661,LastTimestamp:2026-01-22 09:52:13.771378394 +0000 UTC m=+6.215008661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.959982 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d04d333e621eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:13.781803499 +0000 UTC m=+6.225433766,LastTimestamp:2026-01-22 09:52:13.781803499 +0000 UTC 
m=+6.225433766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.975355 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 09:52:29 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-apiserver-crc.188d04d5af42c481 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 09:52:29 crc kubenswrapper[5101]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 09:52:29 crc kubenswrapper[5101]: Jan 22 09:52:29 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:24.441406593 +0000 UTC m=+16.885036860,LastTimestamp:2026-01-22 09:52:24.441406593 +0000 UTC m=+16.885036860,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:29 crc kubenswrapper[5101]: > Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.986297 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d5af44b4d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:24.441533657 +0000 UTC m=+16.885163944,LastTimestamp:2026-01-22 09:52:24.441533657 +0000 UTC m=+16.885163944,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:29 crc kubenswrapper[5101]: E0122 09:52:29.994039 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d5af42c481\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 09:52:29 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-apiserver-crc.188d04d5af42c481 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 09:52:29 crc kubenswrapper[5101]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 09:52:29 crc kubenswrapper[5101]: Jan 22 09:52:29 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:24.441406593 +0000 UTC m=+16.885036860,LastTimestamp:2026-01-22 09:52:24.456269652 +0000 UTC 
m=+16.899899919,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:29 crc kubenswrapper[5101]: > Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.001620 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d5af44b4d9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d5af44b4d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:24.441533657 +0000 UTC m=+16.885163944,LastTimestamp:2026-01-22 09:52:24.456372114 +0000 UTC m=+16.900002381,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.010301 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 22 09:52:30 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-controller-manager-crc.188d04d5e6cc1b4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 22 09:52:30 crc kubenswrapper[5101]: body: Jan 22 09:52:30 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:25.373154123 +0000 UTC m=+17.816784390,LastTimestamp:2026-01-22 09:52:25.373154123 +0000 UTC m=+17.816784390,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:30 crc kubenswrapper[5101]: > Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.015099 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d04d5e6ccdb4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:25.373203275 +0000 UTC m=+17.816833542,LastTimestamp:2026-01-22 09:52:25.373203275 +0000 UTC 
m=+17.816833542,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.028892 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 09:52:30 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-apiserver-crc.188d04d6df8e3ce2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44020->192.168.126.11:17697: read: connection reset by peer Jan 22 09:52:30 crc kubenswrapper[5101]: body: Jan 22 09:52:30 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.546626274 +0000 UTC m=+21.990256531,LastTimestamp:2026-01-22 09:52:29.546626274 +0000 UTC m=+21.990256531,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:30 crc kubenswrapper[5101]: > Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.035035 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 09:52:30 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-apiserver-crc.188d04d6df8e98f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44012->192.168.126.11:17697: read: connection reset by peer Jan 22 09:52:30 crc kubenswrapper[5101]: body: Jan 22 09:52:30 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.546649845 +0000 UTC m=+21.990280112,LastTimestamp:2026-01-22 09:52:29.546649845 +0000 UTC m=+21.990280112,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:30 crc kubenswrapper[5101]: > Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.041832 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d6df8f0674 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44020->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.546677876 +0000 UTC m=+21.990308143,LastTimestamp:2026-01-22 09:52:29.546677876 +0000 UTC m=+21.990308143,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.046457 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d6df8f40ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44012->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.546692846 +0000 UTC m=+21.990323123,LastTimestamp:2026-01-22 09:52:29.546692846 +0000 UTC m=+21.990323123,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.055069 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 09:52:30 crc kubenswrapper[5101]: &Event{ObjectMeta:{kube-apiserver-crc.188d04d6dfdb0b06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get 
"https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44026->192.168.126.11:17697: read: connection reset by peer Jan 22 09:52:30 crc kubenswrapper[5101]: body: Jan 22 09:52:30 crc kubenswrapper[5101]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.551659782 +0000 UTC m=+21.995290049,LastTimestamp:2026-01-22 09:52:29.551659782 +0000 UTC m=+21.995290049,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 09:52:30 crc kubenswrapper[5101]: > Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.060847 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d6dfdc1a0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44026->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:29.551729164 +0000 UTC m=+21.995359461,LastTimestamp:2026-01-22 09:52:29.551729164 +0000 UTC m=+21.995359461,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.065326 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d2f4e1a1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2f4e1a1e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.724543974 +0000 UTC m=+5.168174241,LastTimestamp:2026-01-22 09:52:29.793995143 +0000 UTC m=+22.237625410,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.128569 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d3001351cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d3001351cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.912349645 +0000 UTC m=+5.355979912,LastTimestamp:2026-01-22 09:52:30.124067506 +0000 UTC m=+22.567697763,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.217704 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d300958c89\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d300958c89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.920884361 +0000 UTC m=+5.364514628,LastTimestamp:2026-01-22 09:52:30.212194338 +0000 UTC m=+22.655824615,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.426832 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.796339 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.798831 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6"} Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.799134 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.799900 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.799948 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:30 crc kubenswrapper[5101]: I0122 09:52:30.799967 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:30 crc kubenswrapper[5101]: E0122 09:52:30.800486 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:31 crc kubenswrapper[5101]: E0122 09:52:31.099246 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.431095 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.801864 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.802165 5101 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.803429 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6" exitCode=255 Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.803467 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6"} Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.803519 5101 scope.go:117] "RemoveContainer" containerID="923f3aad7922b02a55ac48193799b2470a8c483f127b466ecc45551f9735cb9b" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.803727 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.804364 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.804394 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.804403 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:31 crc kubenswrapper[5101]: E0122 09:52:31.804676 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:31 crc kubenswrapper[5101]: I0122 09:52:31.804956 5101 scope.go:117] "RemoveContainer" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6" Jan 22 09:52:31 crc 
kubenswrapper[5101]: E0122 09:52:31.805413 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:52:31 crc kubenswrapper[5101]: E0122 09:52:31.810481 5101 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.376738 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.376965 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:32 crc 
kubenswrapper[5101]: I0122 09:52:32.377760 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.377886 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.378103 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:32 crc kubenswrapper[5101]: E0122 09:52:32.378574 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.380864 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.422183 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.807804 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.810348 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.810945 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:32 crc kubenswrapper[5101]: I0122 09:52:32.810982 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:32 crc 
kubenswrapper[5101]: I0122 09:52:32.810991 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:32 crc kubenswrapper[5101]: E0122 09:52:32.811348 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:33 crc kubenswrapper[5101]: I0122 09:52:33.418449 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:34 crc kubenswrapper[5101]: I0122 09:52:34.420770 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:35 crc kubenswrapper[5101]: E0122 09:52:35.025093 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.676646 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.846265 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.847586 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.847670 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.847686 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:35 crc kubenswrapper[5101]: I0122 09:52:35.847716 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 09:52:35 crc kubenswrapper[5101]: E0122 09:52:35.868375 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 09:52:36 crc kubenswrapper[5101]: I0122 09:52:36.499579 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:37 crc kubenswrapper[5101]: I0122 09:52:37.419417 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:38 crc kubenswrapper[5101]: E0122 09:52:38.106301 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.210198 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.210605 5101 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.211704 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.211804 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.211818 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:38 crc kubenswrapper[5101]: E0122 09:52:38.212106 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.212328 5101 scope.go:117] "RemoveContainer" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6" Jan 22 09:52:38 crc kubenswrapper[5101]: E0122 09:52:38.212528 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:52:38 crc kubenswrapper[5101]: E0122 09:52:38.217265 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d7662c6f74\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:52:38.212503054 +0000 UTC m=+30.656133321,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:52:38 crc kubenswrapper[5101]: I0122 09:52:38.421884 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:38 crc kubenswrapper[5101]: E0122 09:52:38.557828 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:52:39 crc kubenswrapper[5101]: I0122 09:52:39.420896 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.419984 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.800501 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.800777 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.801625 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.801686 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.801697 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:52:40 crc kubenswrapper[5101]: E0122 09:52:40.802094 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:52:40 crc kubenswrapper[5101]: I0122 09:52:40.802396 5101 scope.go:117] "RemoveContainer" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6" Jan 22 09:52:40 crc kubenswrapper[5101]: E0122 09:52:40.802620 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:52:40 crc kubenswrapper[5101]: E0122 09:52:40.808261 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d7662c6f74\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:52:40.802588854 +0000 UTC m=+33.246219121,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:41 crc kubenswrapper[5101]: I0122 09:52:41.420156 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:41 crc kubenswrapper[5101]: E0122 09:52:41.420341 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 09:52:41 crc kubenswrapper[5101]: E0122 09:52:41.852284 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 09:52:41 crc kubenswrapper[5101]: E0122 09:52:41.859950 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.420210 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.869140 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.870081 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.870221 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.870325 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:42 crc kubenswrapper[5101]: I0122 09:52:42.870459 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:42 crc kubenswrapper[5101]: E0122 09:52:42.878490 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:52:43 crc kubenswrapper[5101]: I0122 09:52:43.420093 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:44 crc kubenswrapper[5101]: I0122 09:52:44.420363 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:45 crc kubenswrapper[5101]: E0122 09:52:45.110523 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 09:52:45 crc kubenswrapper[5101]: I0122 09:52:45.422192 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:46 crc kubenswrapper[5101]: I0122 09:52:46.419106 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:47 crc kubenswrapper[5101]: I0122 09:52:47.419000 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:48 crc kubenswrapper[5101]: I0122 09:52:48.421220 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:48 crc kubenswrapper[5101]: E0122 09:52:48.558177 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.418932 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.879589 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.880796 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.880964 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.881105 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:49 crc kubenswrapper[5101]: I0122 09:52:49.881218 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:49 crc kubenswrapper[5101]: E0122 09:52:49.892901 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:52:50 crc kubenswrapper[5101]: E0122 09:52:50.092500 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 09:52:50 crc kubenswrapper[5101]: I0122 09:52:50.419676 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:51 crc kubenswrapper[5101]: I0122 09:52:51.418976 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:52 crc kubenswrapper[5101]: E0122 09:52:52.116407 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 09:52:52 crc kubenswrapper[5101]: I0122 09:52:52.420625 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:53 crc kubenswrapper[5101]: I0122 09:52:53.420471 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.420067 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.527818 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.528900 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.528963 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.528978 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:54 crc kubenswrapper[5101]: E0122 09:52:54.529454 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:54 crc kubenswrapper[5101]: I0122 09:52:54.529769 5101 scope.go:117] "RemoveContainer" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6"
Jan 22 09:52:54 crc kubenswrapper[5101]: E0122 09:52:54.535660 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d2f4e1a1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d2f4e1a1e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.724543974 +0000 UTC m=+5.168174241,LastTimestamp:2026-01-22 09:52:54.5310469 +0000 UTC m=+46.974677157,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:54 crc kubenswrapper[5101]: E0122 09:52:54.881285 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d3001351cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d3001351cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.912349645 +0000 UTC m=+5.355979912,LastTimestamp:2026-01-22 09:52:54.875685192 +0000 UTC m=+47.319315479,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:55 crc kubenswrapper[5101]: E0122 09:52:55.364760 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d300958c89\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d300958c89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:12.920884361 +0000 UTC m=+5.364514628,LastTimestamp:2026-01-22 09:52:55.35910967 +0000 UTC m=+47.802739937,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.419835 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:55 crc kubenswrapper[5101]: E0122 09:52:55.617281 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.869754 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.879169 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"}
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.879416 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.880103 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.880132 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:55 crc kubenswrapper[5101]: I0122 09:52:55.880142 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:55 crc kubenswrapper[5101]: E0122 09:52:55.880439 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.419369 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.893388 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.895169 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.895218 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.895227 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:56 crc kubenswrapper[5101]: I0122 09:52:56.895253 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:52:56 crc kubenswrapper[5101]: E0122 09:52:56.903784 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:52:57 crc kubenswrapper[5101]: E0122 09:52:57.261564 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.420718 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.895821 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.896594 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.899175 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b" exitCode=255
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.899254 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"}
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.899318 5101 scope.go:117] "RemoveContainer" containerID="49a101ad02e2441780fc1f0702482e606525230f9404e7ee0164db1fbbdd9ed6"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.899664 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.900464 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.900524 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.900542 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:57 crc kubenswrapper[5101]: E0122 09:52:57.901035 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:57 crc kubenswrapper[5101]: I0122 09:52:57.901453 5101 scope.go:117] "RemoveContainer" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"
Jan 22 09:52:57 crc kubenswrapper[5101]: E0122 09:52:57.901965 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 09:52:57 crc kubenswrapper[5101]: E0122 09:52:57.908466 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d7662c6f74\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:52:57.901913551 +0000 UTC m=+50.345543828,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.210498 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.420289 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:52:58 crc kubenswrapper[5101]: E0122 09:52:58.558590 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.902735 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.904625 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.905156 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.905203 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.905215 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:52:58 crc kubenswrapper[5101]: E0122 09:52:58.905675 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:52:58 crc kubenswrapper[5101]: I0122 09:52:58.905956 5101 scope.go:117] "RemoveContainer" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"
Jan 22 09:52:58 crc kubenswrapper[5101]: E0122 09:52:58.906185 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 09:52:58 crc kubenswrapper[5101]: E0122 09:52:58.910581 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d7662c6f74\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:52:58.906148743 +0000 UTC m=+51.349779010,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:52:59 crc kubenswrapper[5101]: E0122 09:52:59.122404 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 09:52:59 crc kubenswrapper[5101]: I0122 09:52:59.420300 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:00 crc kubenswrapper[5101]: I0122 09:53:00.417564 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:01 crc kubenswrapper[5101]: I0122 09:53:01.420107 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:02 crc kubenswrapper[5101]: I0122 09:53:02.420297 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.422511 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.904262 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.905306 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.905361 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.905376 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:03 crc kubenswrapper[5101]: I0122 09:53:03.905408 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:53:03 crc kubenswrapper[5101]: E0122 09:53:03.915873 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.420156 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.697701 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.703522 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.704633 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.704712 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:04 crc kubenswrapper[5101]: I0122 09:53:04.704734 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:04 crc kubenswrapper[5101]: E0122 09:53:04.705103 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.419934 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.879871 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.880199 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.881752 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.881865 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.881887 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:05 crc kubenswrapper[5101]: E0122 09:53:05.882576 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:53:05 crc kubenswrapper[5101]: I0122 09:53:05.883047 5101 scope.go:117] "RemoveContainer" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"
Jan 22 09:53:05 crc kubenswrapper[5101]: E0122 09:53:05.883399 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 09:53:05 crc kubenswrapper[5101]: E0122 09:53:05.889409 5101 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d04d7662c6f74\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d04d7662c6f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:52:31.805140852 +0000 UTC m=+24.248771119,LastTimestamp:2026-01-22 09:53:05.883346587 +0000 UTC m=+58.326976854,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:53:05 crc kubenswrapper[5101]: E0122 09:53:05.906083 5101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 09:53:06 crc kubenswrapper[5101]: E0122 09:53:06.129087 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 09:53:06 crc kubenswrapper[5101]: I0122 09:53:06.420650 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:07 crc kubenswrapper[5101]: I0122 09:53:07.421663 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:08 crc kubenswrapper[5101]: I0122 09:53:08.420064 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:08 crc kubenswrapper[5101]: E0122 09:53:08.558909 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:53:09 crc kubenswrapper[5101]: I0122 09:53:09.418035 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.422957 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.916981 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.918170 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.918219 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.918230 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:10 crc kubenswrapper[5101]: I0122 09:53:10.918259 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:53:10 crc kubenswrapper[5101]: E0122 09:53:10.927344 5101 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 09:53:11 crc kubenswrapper[5101]: I0122 09:53:11.420238 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:12 crc kubenswrapper[5101]: I0122 09:53:12.419847 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:13 crc kubenswrapper[5101]: E0122 09:53:13.134869 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 09:53:13 crc kubenswrapper[5101]: I0122 09:53:13.419070 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:14 crc kubenswrapper[5101]: I0122 09:53:14.420888 5101 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 09:53:14 crc kubenswrapper[5101]: I0122 09:53:14.713312 5101 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xsngr"
Jan 22 09:53:14 crc kubenswrapper[5101]: I0122 09:53:14.720882 5101 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xsngr"
Jan 22 09:53:14 crc kubenswrapper[5101]: I0122 09:53:14.736543 5101 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 09:53:15 crc kubenswrapper[5101]: I0122 09:53:15.158182 5101 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 22 09:53:15 crc kubenswrapper[5101]: I0122 09:53:15.722676 5101 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-21 09:48:14 +0000 UTC" deadline="2026-02-17 14:43:40.364168781 +0000 UTC"
Jan 22 09:53:15 crc kubenswrapper[5101]: I0122 09:53:15.722752 5101 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="628h50m24.641423981s"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.927958 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.929147 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.929194 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.929207 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.929345 5101 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.937858 5101 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.938168 5101 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.938195 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.940685 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.940747 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.940762 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.940780 5101 kubelet_node_status.go:736] "Recording event
message for node" node="crc" event="NodeNotReady" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.940799 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:17Z","lastTransitionTime":"2026-01-22T09:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.949474 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.955888 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.955924 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.955934 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.955949 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.955960 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:17Z","lastTransitionTime":"2026-01-22T09:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.965953 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.974407 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.974469 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.974483 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.974498 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.974508 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:17Z","lastTransitionTime":"2026-01-22T09:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.983601 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.990850 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.990892 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.990904 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.990918 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:17 crc kubenswrapper[5101]: I0122 09:53:17.990927 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:17Z","lastTransitionTime":"2026-01-22T09:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.999140 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.999287 5101 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:53:17 crc kubenswrapper[5101]: E0122 09:53:17.999310 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.099889 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.200723 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.301762 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.402146 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.502288 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.559788 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.602602 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.702860 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: 
E0122 09:53:18.803179 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:18 crc kubenswrapper[5101]: E0122 09:53:18.904237 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.005087 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.106189 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.206932 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.308136 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.409582 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.510627 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.528174 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.529219 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.529258 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.529271 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.529743 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.530015 5101 scope.go:117] "RemoveContainer" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.611893 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.712921 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.814057 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.914529 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.953714 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.955784 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf"} Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.956032 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.956664 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:19 crc 
kubenswrapper[5101]: I0122 09:53:19.956704 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:19 crc kubenswrapper[5101]: I0122 09:53:19.956717 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:19 crc kubenswrapper[5101]: E0122 09:53:19.957149 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.015463 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.116405 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.217169 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.318327 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.419216 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.520281 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.621017 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.721109 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.822218 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.922400 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.960311 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.960698 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962044 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" exitCode=255
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962087 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf"}
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962121 5101 scope.go:117] "RemoveContainer" containerID="6ced2bbe22e40b7737b1db1b9d4acdd9fc986701e6931cd5f01b1133ad91482b"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962330 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962932 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.962991 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.963003 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.963409 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 09:53:20 crc kubenswrapper[5101]: I0122 09:53:20.963641 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf"
Jan 22 09:53:20 crc kubenswrapper[5101]: E0122 09:53:20.963832 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.022918 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.123688 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.223859 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.324919 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.425990 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.527166 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.627540 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.727930 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.828131 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: E0122 09:53:21.928243 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:21 crc kubenswrapper[5101]: I0122 09:53:21.967162 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.028758 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.129297 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.230356 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.331464 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.432038 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.532557 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.633147 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.733776 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.834306 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:22 crc kubenswrapper[5101]: E0122 09:53:22.934703 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.035652 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.136009 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.236863 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.337755 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.437934 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.538728 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.639708 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.740279 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.840683 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:23 crc kubenswrapper[5101]: E0122 09:53:23.941814 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.042085 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.143001 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.243296 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.343767 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: I0122 09:53:24.364464 5101 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.444033 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.544648 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.645686 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.746072 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.846226 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:24 crc kubenswrapper[5101]: E0122 09:53:24.947287 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.048053 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.149022 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.250100 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.351007 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.451579 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.551893 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.653089 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.753902 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.854933 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:25 crc kubenswrapper[5101]: E0122 09:53:25.955454 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.056458 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.158022 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.259109 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.359583 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.460576 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.561505 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.662570 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.763352 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.864218 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:26 crc kubenswrapper[5101]: E0122 09:53:26.964852 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.065929 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.166305 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.266410 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.366868 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.467964 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.568669 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.669501 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.770190 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.870979 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:27 crc kubenswrapper[5101]: E0122 09:53:27.971588 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.067515 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.071990 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.072048 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.072062 5101
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.072080 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.072098 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:28Z","lastTransitionTime":"2026-01-22T09:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.082740 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.086277 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.086324 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.086334 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.086349 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.086359 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:28Z","lastTransitionTime":"2026-01-22T09:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.097767 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.101524 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.101571 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.101581 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.101596 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.101606 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:28Z","lastTransitionTime":"2026-01-22T09:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.110852 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.113884 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.113936 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.113948 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.113969 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.113983 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:28Z","lastTransitionTime":"2026-01-22T09:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.122584 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.122761 5101 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.122796 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.210223 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.210522 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.211447 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.211500 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.211511 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.211994 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:53:28 crc kubenswrapper[5101]: I0122 09:53:28.212273 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.212575 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.223116 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.324047 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.425151 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.525697 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.560622 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.626809 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.727415 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.827611 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:28 crc kubenswrapper[5101]: E0122 09:53:28.927809 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.028541 5101 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.129464 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.229825 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.330968 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.431644 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.527599 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.528608 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.528675 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.528689 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.529134 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.532303 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.633356 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 
09:53:29.733474 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.833827 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.934263 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.956558 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.956884 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.957941 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.958048 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.958119 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.958569 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:53:29 crc kubenswrapper[5101]: I0122 09:53:29.958913 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:53:29 crc kubenswrapper[5101]: E0122 09:53:29.959171 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.035194 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.135669 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.235849 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.337090 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.437740 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.538676 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.639515 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.740593 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.841681 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:30 crc kubenswrapper[5101]: E0122 09:53:30.941966 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.042322 5101 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.143403 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.244502 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.344631 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.445440 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: I0122 09:53:31.528534 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:31 crc kubenswrapper[5101]: I0122 09:53:31.529446 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:31 crc kubenswrapper[5101]: I0122 09:53:31.529505 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:31 crc kubenswrapper[5101]: I0122 09:53:31.529524 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.530129 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.546287 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.646836 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.747806 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.848756 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:31 crc kubenswrapper[5101]: E0122 09:53:31.949834 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.050295 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.151337 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.251628 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.352079 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.452647 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.553203 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.654317 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.755263 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.856169 5101 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 22 09:53:32 crc kubenswrapper[5101]: E0122 09:53:32.956792 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.057314 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.157809 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.258573 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.359718 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.460870 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.561823 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.662115 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.762702 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.863073 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:33 crc kubenswrapper[5101]: E0122 09:53:33.964092 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.064740 5101 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.165507 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: I0122 09:53:34.233850 5101 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.265694 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.366389 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.467393 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.568531 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.669591 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.770229 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.871340 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:34 crc kubenswrapper[5101]: E0122 09:53:34.971717 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.072257 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 
crc kubenswrapper[5101]: E0122 09:53:35.172920 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.273983 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.374670 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.475546 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.577318 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.677844 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.778106 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.879297 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:35 crc kubenswrapper[5101]: E0122 09:53:35.980217 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.080662 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.181083 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.282031 5101 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.382731 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.483852 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.584784 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.685311 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.785824 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.886852 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:36 crc kubenswrapper[5101]: E0122 09:53:36.987951 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.088783 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.189137 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.289278 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.390363 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.491360 5101 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.592270 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.692767 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.793960 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.895037 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:37 crc kubenswrapper[5101]: E0122 09:53:37.996008 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.096995 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.198068 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.299616 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.345362 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.349780 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.349843 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:38 crc 
kubenswrapper[5101]: I0122 09:53:38.349854 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.349868 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.349878 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:38Z","lastTransitionTime":"2026-01-22T09:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.359486 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.363366 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.363450 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.363463 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.363477 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 
09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.363486 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:38Z","lastTransitionTime":"2026-01-22T09:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.372054 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.376151 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.376220 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.376240 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.376264 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.376281 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:38Z","lastTransitionTime":"2026-01-22T09:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.384961 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.387922 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.387996 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.388027 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.388057 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:38 crc kubenswrapper[5101]: I0122 09:53:38.388087 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:38Z","lastTransitionTime":"2026-01-22T09:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.408149 5101 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae597ad9-e613-4a07-817c-9064cdd0d814\\\",\\\"systemUUID\\\":\\\"ae4e2b0b-7c9a-4831-9c84-cfa14aa36ec7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.408332 5101 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.408371 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.508586 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.561463 5101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.609610 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.710530 5101 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.811450 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:38 crc kubenswrapper[5101]: E0122 09:53:38.912576 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.013394 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.114404 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.215317 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.316265 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.416383 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.517307 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.617590 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.718111 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 09:53:39.818504 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:39 crc kubenswrapper[5101]: E0122 
09:53:39.918866 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.019447 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.120510 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.220861 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.321002 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.421439 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.522553 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: I0122 09:53:40.528122 5101 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:53:40 crc kubenswrapper[5101]: I0122 09:53:40.529127 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:40 crc kubenswrapper[5101]: I0122 09:53:40.529179 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:40 crc kubenswrapper[5101]: I0122 09:53:40.529195 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.529743 5101 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Jan 22 09:53:40 crc kubenswrapper[5101]: I0122 09:53:40.530034 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.530240 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.623573 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.724038 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.824350 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:40 crc kubenswrapper[5101]: E0122 09:53:40.924903 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.025295 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.126097 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.226283 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.327309 5101 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.428201 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.529064 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.629865 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.730094 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.830503 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:41 crc kubenswrapper[5101]: E0122 09:53:41.930978 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.031918 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.132015 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.232483 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.333678 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.434353 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.535904 5101 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.637010 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.737489 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.838230 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:42 crc kubenswrapper[5101]: E0122 09:53:42.939334 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.039989 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.140790 5101 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.175615 5101 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.228024 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.237254 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.243082 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.243136 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 
09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.243150 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.243170 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.243184 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.249356 5101 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.339871 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.345359 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.345400 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.345410 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.345443 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.345456 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.367106 5101 apiserver.go:52] "Watching apiserver" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.374311 5101 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.374715 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-svfcw","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-hzj6r","openshift-multus/network-metrics-daemon-2kpwn","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-machine-config-operator/machine-config-daemon-m45mk","openshift-multus/multus-7sbs4","openshift-multus/multus-additional-cni-plugins-642cb","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg","openshift-dns/node-resolver-69rm7","openshift-etcd/etcd-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl"] Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.376176 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.376304 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.376410 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.376835 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.376959 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.377949 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.380216 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.382567 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.382717 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385269 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385340 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385354 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385522 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385273 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.385274 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.386061 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 
22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.386477 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.390689 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.406252 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.416043 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.418676 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.418763 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.419154 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.419359 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.419403 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.420656 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.421019 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.421314 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.423584 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.423870 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.426446 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.429191 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.430455 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.431640 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.431693 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.431889 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.432226 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.432300 5101 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.432640 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.432710 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.432827 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.435270 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.435589 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.435851 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.435962 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.437345 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.437723 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.439411 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.439499 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.440285 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.442564 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.443184 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.443214 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.443532 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.443887 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.445098 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.445819 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.448169 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.448328 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.449289 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.449839 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.450013 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.451150 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454264 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-multus-certs\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454315 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-tuning-conf-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454346 5101 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454372 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cnibin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454448 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454455 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454480 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cni-binary-copy\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454511 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-multus\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454536 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-os-release\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454562 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-k8s-cni-cncf-io\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454591 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454619 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-conf-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454644 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/8450e755-f74e-492f-8007-24e3410a8926-proxy-tls\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454675 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454706 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-os-release\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454734 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/da0ffdf3-f312-4d31-853b-eae129062d58-serviceca\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454763 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4jq\" (UniqueName: \"kubernetes.io/projected/da0ffdf3-f312-4d31-853b-eae129062d58-kube-api-access-tb4jq\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.454991 5101 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.454997 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.455171 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:43.955128832 +0000 UTC m=+96.398759109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455245 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455319 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455389 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455439 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455451 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455468 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455477 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.455608 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-daemon-config\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456240 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-bin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456304 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456375 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456451 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-etc-kubernetes\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " 
pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456480 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456506 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-system-cni-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456532 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da0ffdf3-f312-4d31-853b-eae129062d58-host\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456560 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-kubelet\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456611 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spc8\" (UniqueName: \"kubernetes.io/projected/5d3ae802-a3c8-4036-8eb6-239ae62f957e-kube-api-access-5spc8\") pod \"multus-7sbs4\" (UID: 
\"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456692 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cnibin\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456820 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8450e755-f74e-492f-8007-24e3410a8926-rootfs\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456882 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t5vp\" (UniqueName: \"kubernetes.io/projected/8450e755-f74e-492f-8007-24e3410a8926-kube-api-access-5t5vp\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.456944 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457057 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-hostroot\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457071 5101 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457102 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhpb\" (UniqueName: \"kubernetes.io/projected/9ddc1292-91f9-4766-9422-1ccd8ae15b14-kube-api-access-7hhpb\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457143 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457170 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-netns\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457212 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" 
(UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457245 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457274 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-socket-dir-parent\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457308 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457341 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457370 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457403 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457541 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-system-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457592 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8450e755-f74e-492f-8007-24e3410a8926-mcd-auth-proxy-config\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.457748 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.458464 5101 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.458886 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.458933 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.458556 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:43.958537587 +0000 UTC m=+96.402167854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.467968 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.468154 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.471041 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.471284 5101 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.471369 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.471458 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.471619 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:43.971596542 +0000 UTC m=+96.415226809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.472642 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.472686 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.472704 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.472820 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:43.972789585 +0000 UTC m=+96.416419952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.476036 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.477201 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.477959 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.483499 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.493652 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.505726 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.518600 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ddc1292-91f9-4766-9422-1ccd8ae15b14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-642cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.529140 5101 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.534209 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.538731 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.538769 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.545601 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558367 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558429 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558443 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558463 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558479 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558572 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558639 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558667 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558698 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558725 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558940 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.558993 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559017 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559035 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559056 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559108 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559284 
5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559519 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559703 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559788 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559831 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559869 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559896 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559928 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559956 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559984 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560010 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: 
\"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560035 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560000 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560060 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560087 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560115 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560135 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560147 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560158 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560174 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560192 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560207 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.559948 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560239 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560274 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560309 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560336 
5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560339 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560370 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560519 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560559 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560587 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" 
(UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560609 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560628 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560658 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560694 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560716 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560746 5101 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560771 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560790 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560810 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560833 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560860 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560883 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560914 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560945 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560961 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561038 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: 
\"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561083 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561116 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561141 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561161 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561177 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561196 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561216 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561235 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561261 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561303 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561324 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc 
kubenswrapper[5101]: I0122 09:53:43.561408 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561476 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561493 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561509 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561526 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561543 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561561 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561580 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561597 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561615 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561637 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 
09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561656 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561674 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561693 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562302 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562341 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562367 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562385 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562445 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562467 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562484 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562501 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562518 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562539 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562554 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562572 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562591 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562609 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 
22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562627 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562643 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562660 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562677 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562696 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562712 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562732 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562751 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562768 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562788 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562807 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 
09:53:43.562826 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562842 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562861 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562883 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562901 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562919 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod 
\"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562935 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562953 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562970 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562986 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563005 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563023 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563041 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563058 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563075 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563111 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563129 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 09:53:43 crc 
kubenswrapper[5101]: I0122 09:53:43.563147 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563163 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563186 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563203 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563222 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563240 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: 
\"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563258 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563276 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563293 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563353 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563375 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563392 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563412 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563454 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563478 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563498 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563518 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563536 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563556 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563574 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563594 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563616 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 
22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563640 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563686 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563709 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563729 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563747 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563766 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563787 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563804 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563823 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563844 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563862 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") 
" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563883 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563904 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563923 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563941 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563960 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563985 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564013 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564045 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564071 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564095 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564113 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564137 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564155 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564175 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566647 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svfcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0ffdf3-f312-4d31-853b-eae129062d58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tb4jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svfcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566714 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566774 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: 
\"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566809 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566843 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566884 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566920 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566951 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566989 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560601 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567015 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560583 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567045 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567078 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567109 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567133 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567160 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567185 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567212 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567238 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567267 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567293 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567325 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567350 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567375 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567404 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567458 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567574 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567604 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567633 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567658 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567688 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567713 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567741 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567773 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567889 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567922 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567954 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567981 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568011 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " 
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568039 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568068 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568100 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568127 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568154 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568181 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568215 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568244 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568271 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568297 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568326 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 
09:53:43.568353 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568378 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568408 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568456 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568492 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568521 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568548 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568586 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568613 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568645 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568736 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: 
\"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568783 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-os-release\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568816 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-k8s-cni-cncf-io\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568845 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-systemd-units\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568870 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-slash\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568917 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-642cb\" (UID: 
\"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568949 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-conf-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568977 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8450e755-f74e-492f-8007-24e3410a8926-proxy-tls\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569008 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-os-release\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569031 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-node-log\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569058 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-log-socket\") pod \"ovnkube-node-hzj6r\" (UID: 
\"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569084 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-config\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569116 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s976\" (UniqueName: \"kubernetes.io/projected/b6cc8637-155e-4b29-97f3-fe9a65c4a539-kube-api-access-7s976\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569162 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/da0ffdf3-f312-4d31-853b-eae129062d58-serviceca\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569188 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4jq\" (UniqueName: \"kubernetes.io/projected/da0ffdf3-f312-4d31-853b-eae129062d58-kube-api-access-tb4jq\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569218 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: 
\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569247 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-kubelet\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569287 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-binary-copy\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569317 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-daemon-config\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569347 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-netns\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569374 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569401 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569511 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-bin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569558 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569591 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxg45\" (UniqueName: \"kubernetes.io/projected/871e02bc-7882-434e-bd9f-e93a2375d495-kube-api-access-xxg45\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569619 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/588311af-b91e-4596-931b-bcb1869b181a-ovn-node-metrics-cert\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569653 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-etc-kubernetes\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569676 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-ovn\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569701 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-env-overrides\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569756 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-system-cni-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569787 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/da0ffdf3-f312-4d31-853b-eae129062d58-host\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569812 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-kubelet\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569844 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5spc8\" (UniqueName: \"kubernetes.io/projected/5d3ae802-a3c8-4036-8eb6-239ae62f957e-kube-api-access-5spc8\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569873 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-netd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569900 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-script-lib\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569947 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cnibin\") pod 
\"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569979 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8450e755-f74e-492f-8007-24e3410a8926-rootfs\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570013 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5t5vp\" (UniqueName: \"kubernetes.io/projected/8450e755-f74e-492f-8007-24e3410a8926-kube-api-access-5t5vp\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570048 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570076 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570105 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-hostroot\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570141 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhpb\" (UniqueName: \"kubernetes.io/projected/9ddc1292-91f9-4766-9422-1ccd8ae15b14-kube-api-access-7hhpb\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570191 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-netns\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570220 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/871e02bc-7882-434e-bd9f-e93a2375d495-tmp-dir\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570248 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-bin\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570281 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570344 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-socket-dir-parent\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570382 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-systemd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570441 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570477 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-system-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570506 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8450e755-f74e-492f-8007-24e3410a8926-mcd-auth-proxy-config\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570538 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpdh\" (UniqueName: \"kubernetes.io/projected/4d9d0a50-8eab-4184-b6dc-38872680242c-kube-api-access-rxpdh\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570564 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-var-lib-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570591 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm47c\" (UniqueName: \"kubernetes.io/projected/588311af-b91e-4596-931b-bcb1869b181a-kube-api-access-wm47c\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570623 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-multus-certs\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570654 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-tuning-conf-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570712 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570744 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cnibin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570773 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/871e02bc-7882-434e-bd9f-e93a2375d495-hosts-file\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570802 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570844 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cni-binary-copy\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570890 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-multus\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570922 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-etc-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571003 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571021 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571052 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571079 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" 
(UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571094 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571107 5101 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571120 5101 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571133 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571146 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571158 5101 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577101 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cnibin\") pod 
\"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577215 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8450e755-f74e-492f-8007-24e3410a8926-rootfs\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577504 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577624 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-system-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577849 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-hostroot\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578112 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-netns\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" 
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578222 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578260 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-socket-dir-parent\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578324 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-conf-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578436 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-bin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578520 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-cni-dir\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578618 5101 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8450e755-f74e-492f-8007-24e3410a8926-mcd-auth-proxy-config\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560653 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560765 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561169 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.560830 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561291 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561385 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561670 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561680 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.561752 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562159 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562258 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562240 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562639 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562656 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.562832 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563092 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563455 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563478 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563568 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.563713 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564022 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564161 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564208 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564471 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.580149 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564782 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564880 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564952 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.564941 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.565524 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.565530 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.565655 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.565980 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566163 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566178 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566228 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566438 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.566610 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567153 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567359 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567396 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567431 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.567624 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568359 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568453 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568618 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568639 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568714 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.568834 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569163 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569274 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569336 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569356 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569716 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569844 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.569937 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570043 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570163 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570178 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570364 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570611 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570605 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570610 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570632 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570633 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570826 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570832 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570846 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570865 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570973 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570876 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.570999 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571012 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571040 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571261 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571391 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571561 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571609 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571617 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571685 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571950 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.571981 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.572128 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.572542 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.573060 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.573607 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.573712 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574106 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574171 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574169 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.581243 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574186 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574308 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.582057 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.581255 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-etc-kubernetes\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574808 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575042 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575167 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575390 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575360 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575534 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575550 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575555 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575561 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575637 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575825 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.575841 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576034 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576208 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576235 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576370 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576383 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576517 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576534 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576585 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.576803 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577479 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577493 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577600 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.577967 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578175 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578415 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.578834 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579111 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579180 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579456 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579775 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579590 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579859 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.579861 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.079830175 +0000 UTC m=+96.523460452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.582348 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-os-release\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.582554 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-kubelet\") pod \"multus-7sbs4\" 
(UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.579869 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.580395 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.580767 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.580845 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.574466 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.582276 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-system-cni-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.583077 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cni-binary-copy\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.583143 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-var-lib-cni-multus\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.583181 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da0ffdf3-f312-4d31-853b-eae129062d58-host\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 
crc kubenswrapper[5101]: I0122 09:53:43.583197 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-cnibin\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.583183 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6cc8637-155e-4b29-97f3-fe9a65c4a539\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s976\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s976\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vkvtg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.587724 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.587770 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.588123 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589513 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8450e755-f74e-492f-8007-24e3410a8926-proxy-tls\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589737 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589777 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589819 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589897 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.589936 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590194 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590340 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590438 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-k8s-cni-cncf-io\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590606 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ddc1292-91f9-4766-9422-1ccd8ae15b14-tuning-conf-dir\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590701 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-host-run-multus-certs\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590785 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d3ae802-a3c8-4036-8eb6-239ae62f957e-os-release\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590809 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590881 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.590973 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d3ae802-a3c8-4036-8eb6-239ae62f957e-multus-daemon-config\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.591114 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.591629 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.591919 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592021 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592082 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592279 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592674 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592957 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593032 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.592956 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593155 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593242 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ddc1292-91f9-4766-9422-1ccd8ae15b14-cni-binary-copy\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593528 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593691 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.593828 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.594212 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.594483 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.594558 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.595466 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.595516 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.595700 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.596710 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.596835 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhpb\" (UniqueName: \"kubernetes.io/projected/9ddc1292-91f9-4766-9422-1ccd8ae15b14-kube-api-access-7hhpb\") pod \"multus-additional-cni-plugins-642cb\" (UID: \"9ddc1292-91f9-4766-9422-1ccd8ae15b14\") " pod="openshift-multus/multus-additional-cni-plugins-642cb"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597133 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/da0ffdf3-f312-4d31-853b-eae129062d58-serviceca\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597260 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t5vp\" (UniqueName: \"kubernetes.io/projected/8450e755-f74e-492f-8007-24e3410a8926-kube-api-access-5t5vp\") pod \"machine-config-daemon-m45mk\" (UID: \"8450e755-f74e-492f-8007-24e3410a8926\") " pod="openshift-machine-config-operator/machine-config-daemon-m45mk"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597273 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597408 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597669 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597750 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.597994 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.598030 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.598131 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.599604 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.603499 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.604432 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.604623 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.604750 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.604979 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.605076 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.605287 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.605538 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.605574 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.606044 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607161 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607282 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607305 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607358 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607340 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607916 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.607933 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608158 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608165 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608245 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608259 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608364 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.608742 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.609667 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.609665 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5spc8\" (UniqueName: \"kubernetes.io/projected/5d3ae802-a3c8-4036-8eb6-239ae62f957e-kube-api-access-5spc8\") pod \"multus-7sbs4\" (UID: \"5d3ae802-a3c8-4036-8eb6-239ae62f957e\") " pod="openshift-multus/multus-7sbs4"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610118 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610155 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610200 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610283 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610567 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610737 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.610796 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.611459 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.611726 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.612019 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.612263 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.612467 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.612744 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4jq\" (UniqueName: \"kubernetes.io/projected/da0ffdf3-f312-4d31-853b-eae129062d58-kube-api-access-tb4jq\") pod \"node-ca-svfcw\" (UID: \"da0ffdf3-f312-4d31-853b-eae129062d58\") " pod="openshift-image-registry/node-ca-svfcw"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.613899 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.620543 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.620973 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.626305 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.630504 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.632493 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.638870 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.645205 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.647925 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ddc1292-91f9-4766-9422-1ccd8ae15b14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-642cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.658309 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-69rm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"871e02bc-7882-434e-bd9f-e93a2375d495\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxg45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69rm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.661225 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.661283 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.661303 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.661365 5101 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.661374 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.668868 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d9d0a50-8eab-4184-b6dc-38872680242c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2kpwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672088 5101 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-node-log\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672142 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-log-socket\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672163 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-config\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672184 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7s976\" (UniqueName: \"kubernetes.io/projected/b6cc8637-155e-4b29-97f3-fe9a65c4a539-kube-api-access-7s976\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672210 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-kubelet\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672230 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-netns\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672252 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672271 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672295 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxg45\" (UniqueName: \"kubernetes.io/projected/871e02bc-7882-434e-bd9f-e93a2375d495-kube-api-access-xxg45\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672316 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/588311af-b91e-4596-931b-bcb1869b181a-ovn-node-metrics-cert\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672337 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-ovn\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672357 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-env-overrides\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672397 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-netd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672444 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-script-lib\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672474 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.672527 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.673111 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.673201 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.173182412 +0000 UTC m=+96.616812679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673195 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-ovn\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673226 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-netd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673464 5101 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-kubelet\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673503 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-node-log\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673531 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673554 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/871e02bc-7882-434e-bd9f-e93a2375d495-tmp-dir\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673849 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-bin\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673936 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674017 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-cni-bin\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674034 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-systemd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673620 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-log-socket\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674045 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/871e02bc-7882-434e-bd9f-e93a2375d495-tmp-dir\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673675 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.673710 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-netns\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674256 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-systemd\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674344 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxpdh\" (UniqueName: \"kubernetes.io/projected/4d9d0a50-8eab-4184-b6dc-38872680242c-kube-api-access-rxpdh\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674415 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-var-lib-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674540 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wm47c\" (UniqueName: \"kubernetes.io/projected/588311af-b91e-4596-931b-bcb1869b181a-kube-api-access-wm47c\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674629 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/871e02bc-7882-434e-bd9f-e93a2375d495-hosts-file\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674697 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674790 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-etc-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674860 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675078 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-systemd-units\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675193 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-slash\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675275 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-systemd-units\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675317 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-etc-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674801 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-config\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675275 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-slash\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675300 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.674838 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-var-lib-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.675864 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/871e02bc-7882-434e-bd9f-e93a2375d495-hosts-file\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676003 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/588311af-b91e-4596-931b-bcb1869b181a-run-openvswitch\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676004 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-ovnkube-script-lib\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676049 5101 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676216 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676281 5101 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676318 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/588311af-b91e-4596-931b-bcb1869b181a-env-overrides\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676338 5101 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676501 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676574 5101 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676592 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676633 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676648 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676661 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676674 5101 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676686 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676719 5101 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676731 5101 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676745 5101 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676755 5101 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676767 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676801 5101 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676812 5101 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676824 5101 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676835 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676849 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg"
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676846 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676883 5101 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676907 5101 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676919 5101 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676951 5101 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676968 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676979 5101 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.676991 5101 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677002 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677036 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677048 5101 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677060 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677072 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677083 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677117 5101 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677128 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677140 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677155 5101 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677167 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677202 5101 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677215 5101 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677228 5101 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677239 5101 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677250 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677262 5101 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677294 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677307 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677318 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677329 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677339 5101 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677350 5101 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677361 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677372 5101 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677384 5101 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677395 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677408 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677480 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677492 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677503 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677515 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677525 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677540 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677551 5101 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677564 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677576 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677589 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677601 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677615 5101 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677627 5101 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677639 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677652 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677692 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677707 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677720 5101 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677731 5101 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677743 5101 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677754 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677765 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677777 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677788 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677798 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677809 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677820 5101 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677834 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677846 5101 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677857 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677872 5101 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677885 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677896 5101 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677908 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677936 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677947 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677958 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677971 5101 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677982 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.677995 5101 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678007 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678019 5101 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678030 5101 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678041 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678078 5101 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678089 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678100 5101 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678112 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678123 5101 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678133 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678143 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678154 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678164 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678175 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678185 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678195 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678218 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678230 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678240 5101 reconciler_common.go:299] 
"Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678252 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678262 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678273 5101 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678284 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678294 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678305 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678315 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: 
\"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678328 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678339 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678349 5101 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678361 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678371 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678382 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678391 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678402 5101 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678412 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678437 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678448 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682301 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682316 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682327 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: 
\"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682357 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682368 5101 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682380 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682392 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682409 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682510 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682520 5101 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 22 
09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682532 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682545 5101 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682555 5101 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682564 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682577 5101 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682587 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682599 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682610 5101 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.682623 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.683000 5101 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.683097 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.683189 5101 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.683251 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b6cc8637-155e-4b29-97f3-fe9a65c4a539-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.678081 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/588311af-b91e-4596-931b-bcb1869b181a-ovn-node-metrics-cert\") pod 
\"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.683263 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684030 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684063 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684083 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684095 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684115 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684125 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" 
(UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684135 5101 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684145 5101 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684161 5101 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684210 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684223 5101 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684232 5101 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684246 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node 
\"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684287 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684299 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684314 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684324 5101 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684364 5101 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.684376 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.687180 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"588311af-b91e-4596-931b-bcb1869b181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689110 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689138 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689150 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689163 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689173 5101 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689183 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689193 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" 
DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689212 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689222 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689232 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689244 5101 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689256 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689265 5101 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689302 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689313 5101 reconciler_common.go:299] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689519 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689539 5101 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689710 5101 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689736 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689750 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689761 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689770 5101 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc 
kubenswrapper[5101]: I0122 09:53:43.689779 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689792 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689800 5101 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689812 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689822 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689834 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689846 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689857 
5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689868 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689877 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689885 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689894 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689905 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.689915 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.692498 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxg45\" (UniqueName: 
\"kubernetes.io/projected/871e02bc-7882-434e-bd9f-e93a2375d495-kube-api-access-xxg45\") pod \"node-resolver-69rm7\" (UID: \"871e02bc-7882-434e-bd9f-e93a2375d495\") " pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.692723 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm47c\" (UniqueName: \"kubernetes.io/projected/588311af-b91e-4596-931b-bcb1869b181a-kube-api-access-wm47c\") pod \"ovnkube-node-hzj6r\" (UID: \"588311af-b91e-4596-931b-bcb1869b181a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.693907 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxpdh\" (UniqueName: \"kubernetes.io/projected/4d9d0a50-8eab-4184-b6dc-38872680242c-kube-api-access-rxpdh\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.695409 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s976\" (UniqueName: \"kubernetes.io/projected/b6cc8637-155e-4b29-97f3-fe9a65c4a539-kube-api-access-7s976\") pod \"ovnkube-control-plane-57b78d8988-vkvtg\" (UID: \"b6cc8637-155e-4b29-97f3-fe9a65c4a539\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.697321 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.702531 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebfa479c-e165-476d-bd0f-766a025a73ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35da6a4d2
4f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:53:20Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0122 09:53:20.240467 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:53:20.240646 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 09:53:20.241560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3163626642/tls.crt::/tmp/serving-cert-3163626642/tls.key\\\\\\\"\\\\nI0122 09:53:20.630389 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:53:20.632100 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:53:20.632826 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:53:20.632961 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:53:20.632979 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:53:20.637497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:53:20.637596 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637602 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:53:20.637610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:53:20.637613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:53:20.637632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:53:20.637518 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:53:20.640494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:53:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.703216 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.711995 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 09:53:43 crc kubenswrapper[5101]: W0122 09:53:43.724613 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-e540207c6fc8f24eeee03e996d72bd52fa9906743a029cb4ae97692726bfa909 WatchSource:0}: Error finding container e540207c6fc8f24eeee03e996d72bd52fa9906743a029cb4ae97692726bfa909: Status 404 returned error can't find the container with id e540207c6fc8f24eeee03e996d72bd52fa9906743a029cb4ae97692726bfa909 Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.724868 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b31c043a-4fc8-4e04-8704-c46a6a322c78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://71296c0273ec3f2a6ae3beabb36bbaee0f0b2dc3917cbd4527cb3350a48f471d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480d
a671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://16a2b2622abd10aa45112a1ee93f3e60e153620424b6d9c047c6f8b2eaf54120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7704a25ab383ef9175497e0e37c2a841354084929391ce2b90efd2dd35aabf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9fb34bb1e8b777fcb9f3bd8727c917e980b2e42d41c032a6bcb123864464c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22
T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc177986df81bd5ef6c6d984c1b802703e93e08d161ac812ff2ecfe1ab8c25a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gi
d\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.731687 5101 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-642cb" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.736913 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.745853 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7sbs4" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.747808 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-7sbs4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3ae802-a3c8-4036-8eb6-239ae62f957e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5spc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7sbs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: W0122 09:53:43.757599 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ddc1292_91f9_4766_9422_1ccd8ae15b14.slice/crio-3d33c94dd2841ba055f101a9ffaa5f700d45c16402d1bcccc85c3283709811c0 WatchSource:0}: Error finding container 3d33c94dd2841ba055f101a9ffaa5f700d45c16402d1bcccc85c3283709811c0: Status 404 returned error can't find the container with id 3d33c94dd2841ba055f101a9ffaa5f700d45c16402d1bcccc85c3283709811c0 Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.759070 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072eda2c-bb35-4a32-896f-cbe2c1f33b13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://333d46208c759a5c89c03961c535ca7fbac296abd5bcf55e6438be51c57d2418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.759214 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-svfcw" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.766590 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.766637 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.766649 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.766668 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.766679 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.770520 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8450e755-f74e-492f-8007-24e3410a8926\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t5vp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t5vp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m45mk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.790573 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.799004 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-69rm7" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.809032 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.814746 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.871317 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.871355 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.871364 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.871377 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.871387 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.977052 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.977091 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.977103 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.977122 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.977134 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:43Z","lastTransitionTime":"2026-01-22T09:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.993116 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.993253 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.993472 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:43 crc kubenswrapper[5101]: I0122 09:53:43.993546 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993347 5101 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993678 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993702 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.993678765 +0000 UTC m=+97.437309032 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993704 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993726 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993748 5101 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993759 5101 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.993750337 +0000 UTC m=+97.437380684 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993379 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993788 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993797 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993814 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.993795518 +0000 UTC m=+97.437425835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:43 crc kubenswrapper[5101]: E0122 09:53:43.993835 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:44.993826509 +0000 UTC m=+97.437456776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.031359 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"91bd072ace8c873746bee2eb0b1ab534a1b2335c3c513e8048b73ff676277321"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.031404 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3da5b70f78e1b1b74bb0537d4831c942464b9b12d5ebc196254189516ae70fd1"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.032334 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-svfcw" 
event={"ID":"da0ffdf3-f312-4d31-853b-eae129062d58","Type":"ContainerStarted","Data":"c5cfcc5064b8763f6bce7193add8cd621d8e270a141678e46c7ad6f1f6f571b5"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.035905 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e540207c6fc8f24eeee03e996d72bd52fa9906743a029cb4ae97692726bfa909"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.037897 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"d9c1f3135ca272ae735a48876b08b60641605f3926b3fa654d23b8285ebbdd92"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.039662 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-69rm7" event={"ID":"871e02bc-7882-434e-bd9f-e93a2375d495","Type":"ContainerStarted","Data":"4fd1bd45aeba0c852bc4791f58049ea0f250a770cdedc44a10441d07fdc05f88"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.041828 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" event={"ID":"b6cc8637-155e-4b29-97f3-fe9a65c4a539","Type":"ContainerStarted","Data":"780e450263ba4206ea23188fd0fd4f31fcbc2ae093b68c095fdfb3699b3d7e8f"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.044005 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerStarted","Data":"3d33c94dd2841ba055f101a9ffaa5f700d45c16402d1bcccc85c3283709811c0"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.045224 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" 
event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerStarted","Data":"1578a2343f82e6eae79a890b8c0a6a0c1466623802ca9d1a4fd0316ccf5a4c5a"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.048083 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7sbs4" event={"ID":"5d3ae802-a3c8-4036-8eb6-239ae62f957e","Type":"ContainerStarted","Data":"e3bd8783175e13b08adcb42215d309966c9d13939f945449ba9ab2e106ff5bb2"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.048110 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7sbs4" event={"ID":"5d3ae802-a3c8-4036-8eb6-239ae62f957e","Type":"ContainerStarted","Data":"5ebe13203355367d12ef4e6d60dcdcaf4a850c30a387301d2b73ac82dbbb5d54"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.051314 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"da69b451e974412cb445385fe278affad61a8e8f2db9c74fc1e073269920ba3e"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.051348 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e10ec561ba177fa4958bf97db22e33df94a1af23419e90425e5fce14ac51b49e"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.070599 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svfcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0ffdf3-f312-4d31-853b-eae129062d58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tb4jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svfcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.079065 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6cc8637-155e-4b29-97f3-fe9a65c4a539\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s976\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s976\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vkvtg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.148332 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.148747 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:53:45.148716156 +0000 UTC m=+97.592346423 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206243 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206293 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206303 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206320 
5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206330 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.206238 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f73c04be-5c54-4e75-b333-3e8f2c4e8cda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b16d75a0b02329061065edfa62b83d1a4f07d842a78692e1f9d2132cc368d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8
e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5537fc13540bda46d5a4c4b17f797bc83183247a32e70554b1635d1b974da1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"
5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2bed479d7ddba528ba362dfacf9a4c937cdaa5c6c47a0a85e257cdf288c8d832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89f4830ad58f953495c2fb4ac1e08c64d2fe2fd2607a324b0523b1ba4890d434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f4830ad58f953495c2fb4ac1e08c64
d2fe2fd2607a324b0523b1ba4890d434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.236353 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.249040 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.295310 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.295770 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.295902 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:53:45.295878745 +0000 UTC m=+97.739509012 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317890 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317933 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317945 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317961 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317974 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.317903 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.329446 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ddc1292-91f9-4766-9422-1ccd8ae15b14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hhpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-642cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.390843 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-69rm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"871e02bc-7882-434e-bd9f-e93a2375d495\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxg45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69rm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.401363 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d9d0a50-8eab-4184-b6dc-38872680242c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2kpwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.420651 5101 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.420986 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.421003 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.421024 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.421032 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.421713 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"588311af-b91e-4596-931b-bcb1869b181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.437552 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebfa479c-e165-476d-bd0f-766a025a73ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35da6a4d2
4f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:53:20Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0122 09:53:20.240467 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:53:20.240646 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 09:53:20.241560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3163626642/tls.crt::/tmp/serving-cert-3163626642/tls.key\\\\\\\"\\\\nI0122 09:53:20.630389 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:53:20.632100 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:53:20.632826 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:53:20.632961 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:53:20.632979 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:53:20.637497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:53:20.637596 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637602 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:53:20.637610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:53:20.637613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:53:20.637632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:53:20.637518 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:53:20.640494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:53:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.458656 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b31c043a-4fc8-4e04-8704-c46a6a322c78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://71296c0273ec3f2a6ae3beabb36bbaee0f0b2dc3917cbd4527cb3350a48f471d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://16a2b2622abd10aa45112a1ee93f3e60e153620424b6d9c047c6f8b2eaf54120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7704a25ab383ef9175497e0e37c2a841354084929391ce2b90efd2dd35aabf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9fb34bb1e8b777fcb9f3bd8727c917e980b2e42d41c032a6bcb123864464c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc177986df81bd5ef6c6d984c1b802703e93e08d161ac812ff2ecfe1ab8c25a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b6ac3c0342faa32f6a25460fbf9cc517ae38d8150ec17731ffceabbaf06303\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a93d3e7f0e3cbabdb74cd6205277ecc524f24bbb13e64119552404b1be914ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://c8ad54eaaa4a1f5fcbcf758dc56f3c46e074f814983aa103ec941c0713dda324\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.468765 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.477197 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-7sbs4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d3ae802-a3c8-4036-8eb6-239ae62f957e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://e3bd8783175e13b08adcb42215d309966c9d13939f945449ba9ab2e106ff5bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:53:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5spc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7sbs4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc 
kubenswrapper[5101]: I0122 09:53:44.487672 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f43fc85-780c-4c64-8b7a-62c01d1037a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://86f3894614da3147910d65833f0c6bc7534abcecacc86ae6b4ab118351aa6206\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://11c301b0019cc0ce818e211b588e3719290a84356f16119fa171487af85d21ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e94d99614d5483322fcd7d70509229b78e2636a87a3d822ede4e7731750d6ca5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.504023 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072eda2c-bb35-4a32-896f-cbe2c1f33b13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://333d46208c759a5c89c03961c535ca7fbac296abd5bcf55e6438be51c57d2418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0675c15aea42b9ac09729a37f01e833d6164f12d6ab14fd585684793a19207ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.521241 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8450e755-f74e-492f-8007-24e3410a8926\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t5vp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t5vp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m45mk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.522621 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.522655 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.522667 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.522683 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.522694 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.528414 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.528578 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.528967 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.529063 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.529143 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:44 crc kubenswrapper[5101]: E0122 09:53:44.529221 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.532658 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.533397 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.534326 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.536078 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.537417 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" 
path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.539527 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.541418 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.542653 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.544177 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.544899 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.547402 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.548787 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.549513 5101 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.551228 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.552291 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.553897 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.554960 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.555863 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.557166 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.558262 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.558795 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-69rm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"871e02bc-7882-434e-bd9f-e93a2375d495\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxg45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69rm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.559679 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.560830 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.562335 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.564525 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.565563 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.567244 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.568210 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.569942 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.570096 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d9d0a50-8eab-4184-b6dc-38872680242c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxpdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2kpwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.570823 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.572247 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.574760 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.575400 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.577105 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.578616 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.579854 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.581097 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.597082 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.598059 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.598887 5101 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.599627 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.602771 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.607780 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.608935 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.610694 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.612273 5101 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.614950 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.616111 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.617574 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.618973 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.621925 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.623599 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.624300 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.625694 5101 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.626391 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.627634 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.628624 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.630519 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.631290 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.632911 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.633950 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.635767 5101 
status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"588311af-b91e-4596-931b-bcb1869b181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:53:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzj6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.636724 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.636766 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.636779 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.636802 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.636814 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.695884 5101 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebfa479c-e165-476d-bd0f-766a025a73ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f35da6a4d2
4f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:53:20Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0122 09:53:20.240467 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:53:20.240646 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 09:53:20.241560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3163626642/tls.crt::/tmp/serving-cert-3163626642/tls.key\\\\\\\"\\\\nI0122 09:53:20.630389 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:53:20.632100 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:53:20.632826 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:53:20.632961 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:53:20.632979 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:53:20.637497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:53:20.637596 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637602 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:53:20.637607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:53:20.637610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:53:20.637613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:53:20.637632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:53:20.637518 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:53:20.640494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:53:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:52:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:52:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:52:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:52:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.755605 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.755655 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.755677 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.755699 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.755711 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.773767 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.773738212 podStartE2EDuration="1.773738212s" podCreationTimestamp="2026-01-22 09:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:44.771983893 +0000 UTC m=+97.215614160" watchObservedRunningTime="2026-01-22 09:53:44.773738212 +0000 UTC m=+97.217368479" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.855087 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7sbs4" podStartSLOduration=73.855044843 podStartE2EDuration="1m13.855044843s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:44.82806512 +0000 UTC m=+97.271695387" watchObservedRunningTime="2026-01-22 09:53:44.855044843 +0000 UTC m=+97.298675110" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.859089 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.859165 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.859189 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.859213 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.859236 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.882631 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.8826031429999999 podStartE2EDuration="1.882603143s" podCreationTimestamp="2026-01-22 09:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:44.85493564 +0000 UTC m=+97.298565907" watchObservedRunningTime="2026-01-22 09:53:44.882603143 +0000 UTC m=+97.326233410" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.900275 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.900253736 podStartE2EDuration="1.900253736s" podCreationTimestamp="2026-01-22 09:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:44.883383245 +0000 UTC m=+97.327013512" watchObservedRunningTime="2026-01-22 09:53:44.900253736 +0000 UTC m=+97.343883993" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.960978 5101 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.961025 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.961035 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.961048 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.961056 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:44Z","lastTransitionTime":"2026-01-22T09:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:44 crc kubenswrapper[5101]: I0122 09:53:44.974398 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.974381497 podStartE2EDuration="1.974381497s" podCreationTimestamp="2026-01-22 09:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:44.973805071 +0000 UTC m=+97.417435348" watchObservedRunningTime="2026-01-22 09:53:44.974381497 +0000 UTC m=+97.418011764" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.022769 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.022844 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.022872 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.022917 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.022922 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.022945 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.022957 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023006 5101 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023116 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023129 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 
09:53:45.023138 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023207 5101 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023012 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.022997035 +0000 UTC m=+99.466627302 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023270 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.023258542 +0000 UTC m=+99.466888919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023283 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.023276472 +0000 UTC m=+99.466906739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.023295 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.023289463 +0000 UTC m=+99.466919730 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.056259 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerStarted","Data":"e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.056337 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerStarted","Data":"09623588d8e95874f2be2171758cb6d274cf2b67529a4b84230f502684bf3a35"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.057818 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"8e30b46bea42a6921bfc7748b7c4cfc4538f67e30e4cb656d2b608a23c8ba01e"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.059132 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-svfcw" event={"ID":"da0ffdf3-f312-4d31-853b-eae129062d58","Type":"ContainerStarted","Data":"4cf7ec33c554ca02260ceaff6125effdcb92a343fdd25675a7df74ec04328152"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.060588 5101 generic.go:358] "Generic (PLEG): container finished" podID="588311af-b91e-4596-931b-bcb1869b181a" containerID="ac08d1d256968baf34b8eb14b6704666679d3fae00dad7d8b2816be94ccebda9" exitCode=0 Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.060641 5101 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerDied","Data":"ac08d1d256968baf34b8eb14b6704666679d3fae00dad7d8b2816be94ccebda9"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.061895 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-69rm7" event={"ID":"871e02bc-7882-434e-bd9f-e93a2375d495","Type":"ContainerStarted","Data":"3e49597a2c2a034a56435d8f5765b24d73a508182f3e2c73c12abcd37c889b0d"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.062746 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.062775 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.062787 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.062802 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.062813 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.063814 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" event={"ID":"b6cc8637-155e-4b29-97f3-fe9a65c4a539","Type":"ContainerStarted","Data":"1d75aa97f1d5633e3ccfd950c2c52c6f7fa486479e7fed6257644991b613d680"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.063837 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" event={"ID":"b6cc8637-155e-4b29-97f3-fe9a65c4a539","Type":"ContainerStarted","Data":"82f1b0cee1939513cd81d59a6364816ffa9124837985948a3d457075b4ad7133"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.075046 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerStarted","Data":"bdde638a81dd7cb8bf17035285bb304cc8f7e7640607ebb96bb4a1dfb1dbef82"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.164935 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.165318 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.165328 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.165344 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.165354 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.226474 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.227003 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.226964102 +0000 UTC m=+99.670594379 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.267170 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.267219 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.267228 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.267243 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.267252 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.271672 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podStartSLOduration=74.2716337 podStartE2EDuration="1m14.2716337s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:45.21830363 +0000 UTC m=+97.661933897" watchObservedRunningTime="2026-01-22 09:53:45.2716337 +0000 UTC m=+97.715263967" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.308299 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-69rm7" podStartSLOduration=74.308277223 podStartE2EDuration="1m14.308277223s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:45.306850243 +0000 UTC m=+97.750480510" watchObservedRunningTime="2026-01-22 09:53:45.308277223 +0000 UTC m=+97.751907480" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.327537 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.327778 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.327883 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs 
podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:53:47.32786071 +0000 UTC m=+99.771490977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.369520 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.369565 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.369577 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.369602 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.369622 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.418990 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-svfcw" podStartSLOduration=74.418971915 podStartE2EDuration="1m14.418971915s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:45.418532043 +0000 UTC m=+97.862162320" watchObservedRunningTime="2026-01-22 09:53:45.418971915 +0000 UTC m=+97.862602182" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.465688 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vkvtg" podStartSLOduration=73.465647529 podStartE2EDuration="1m13.465647529s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:45.464392033 +0000 UTC m=+97.908022300" watchObservedRunningTime="2026-01-22 09:53:45.465647529 +0000 UTC m=+97.909277806" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.472079 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.472129 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.472140 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.472158 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.472168 5101 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.528298 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:45 crc kubenswrapper[5101]: E0122 09:53:45.528537 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.578707 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.578774 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.578792 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.578813 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.578824 5101 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.681649 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.681707 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.681717 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.681739 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:45 crc kubenswrapper[5101]: I0122 09:53:45.681749 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:45Z","lastTransitionTime":"2026-01-22T09:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.033413 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.033804 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.033824 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.033843 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.033860 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.110303 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"cd30c72d59a22482af04ceb81147a55e69fd58cca20b899b0c809ddbf8aeb38e"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.110346 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"8d41fd4c0e3d125ed87e5cd8081b372dad8246463ae401deac099be441fb0272"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.110356 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"964a0bc307a0c019ef032d58adfe73b295098f896d0d9ad09aeffd7cd68894e1"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.111325 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="bdde638a81dd7cb8bf17035285bb304cc8f7e7640607ebb96bb4a1dfb1dbef82" exitCode=0 Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.111677 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"bdde638a81dd7cb8bf17035285bb304cc8f7e7640607ebb96bb4a1dfb1dbef82"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.144497 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.144536 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.144548 5101 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.144564 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.144577 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.266776 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.267158 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.267171 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.267195 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.267207 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.375739 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.375800 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.375814 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.375832 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.375844 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.478038 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.478078 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.478091 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.478105 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.478114 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.537836 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:46 crc kubenswrapper[5101]: E0122 09:53:46.537961 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.538085 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:46 crc kubenswrapper[5101]: E0122 09:53:46.538219 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.538329 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:46 crc kubenswrapper[5101]: E0122 09:53:46.538398 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.580472 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.580522 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.580543 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.580574 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.580593 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.686945 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.687078 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.687149 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.687220 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.687308 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.792583 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.792639 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.792656 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.792677 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.792689 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.895132 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.895172 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.895181 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.895198 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.895206 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.998325 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.998366 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.998377 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.998394 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:53:46 crc kubenswrapper[5101]: I0122 09:53:46.998407 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:46Z","lastTransitionTime":"2026-01-22T09:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.100952 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.101000 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.101011 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.101073 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.101087 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.105121 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.105164 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.105186 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.105209 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105314 5101 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105358 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.105345169 +0000 UTC m=+103.548975436 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105693 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105709 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105718 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105745 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.10573643 +0000 UTC m=+103.549366697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105786 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105793 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105799 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105820 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.105814092 +0000 UTC m=+103.549444360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105847 5101 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.105865 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.105860764 +0000 UTC m=+103.549491031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.117625 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"c0d58a572ec82f4e603079fdccb97223b3671b2c13a1fa21c85be6adf1e25956"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.117676 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"0a447a531dd002ba8c9900fbe751d5b92c0bb00bd5408fcd45cccfaddb4bbf40"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.117688 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"a9c508fe681a44b8b55453049cfa4334979b0359999b15e9a411de4b557f4a13"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.118843 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerStarted","Data":"704a20ba35161353b0ab69c4bb21a643ee310ec2e5d015f73543abe903829a49"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.203669 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.203991 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.204005 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.204022 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.204035 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.312917 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.313188 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.313167134 +0000 UTC m=+103.756797401 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.315319 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.315381 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.315393 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.315410 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.315448 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.414116 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn"
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.414357 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.414463 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:53:51.414412682 +0000 UTC m=+103.858042959 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.422485 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.422543 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.422556 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.422575 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.422587 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.524352 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.524734 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.524805 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.524874 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.524948 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.527655 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 09:53:47 crc kubenswrapper[5101]: E0122 09:53:47.527762 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.627240 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.627282 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.627291 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.627306 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.627315 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.729703 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.729754 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.729764 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.729788 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.729808 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.831769 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.832085 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.832197 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.832297 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.832360 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.934457 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.934709 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.934807 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.934881 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:47 crc kubenswrapper[5101]: I0122 09:53:47.934943 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:47Z","lastTransitionTime":"2026-01-22T09:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.037730 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.037797 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.037822 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.037840 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.037853 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.127824 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="704a20ba35161353b0ab69c4bb21a643ee310ec2e5d015f73543abe903829a49" exitCode=0
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.127914 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"704a20ba35161353b0ab69c4bb21a643ee310ec2e5d015f73543abe903829a49"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.140292 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.140341 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.140353 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.140372 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.140384 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.283352 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.283403 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.283415 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.283457 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.283472 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.385869 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.385927 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.385939 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.385954 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.385963 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.487670 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.487714 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.487725 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.487742 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.487768 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.504151 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.504194 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.504207 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.504222 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.504234 5101 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:53:48Z","lastTransitionTime":"2026-01-22T09:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.530068 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.530109 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 09:53:48 crc kubenswrapper[5101]: E0122 09:53:48.530194 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 09:53:48 crc kubenswrapper[5101]: E0122 09:53:48.530334 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.530452 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn"
Jan 22 09:53:48 crc kubenswrapper[5101]: E0122 09:53:48.530607 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.557013 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"]
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.560029 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.561874 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.562461 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.562497 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.562529 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.727931 5101 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.734375 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.734444 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.734498 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.734522 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.734551 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.735930 5101 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.835792 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.835863 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.835982 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.836236 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.836291 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b"
Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.836321 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\"
(UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.837006 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.838179 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.848931 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.862910 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dded53a4-cd39-4de8-9c8d-2ceea0c81fd2-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-zwx7b\" (UID: \"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: I0122 09:53:48.906182 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" Jan 22 09:53:48 crc kubenswrapper[5101]: W0122 09:53:48.920739 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddded53a4_cd39_4de8_9c8d_2ceea0c81fd2.slice/crio-b05683201334948e6159d2c8681d8b70842752f6c24c959488d583dce56e0fc9 WatchSource:0}: Error finding container b05683201334948e6159d2c8681d8b70842752f6c24c959488d583dce56e0fc9: Status 404 returned error can't find the container with id b05683201334948e6159d2c8681d8b70842752f6c24c959488d583dce56e0fc9 Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.139575 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"93659da2d97e21db071d94bd864538194d0c03b0f983454e2db673c9e1828a7c"} Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.142092 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="8b0d73610e17b9f6d5276765a55f80aa587087f330a28f2f58918f2bddfe90e5" exitCode=0 Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.142295 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"8b0d73610e17b9f6d5276765a55f80aa587087f330a28f2f58918f2bddfe90e5"} Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.145114 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" event={"ID":"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2","Type":"ContainerStarted","Data":"d67ee9fd7e06a9e12f35193ecccf65fc3ae7cde41ed095e749d0ce7b905b9d5c"} Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.145158 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" event={"ID":"dded53a4-cd39-4de8-9c8d-2ceea0c81fd2","Type":"ContainerStarted","Data":"b05683201334948e6159d2c8681d8b70842752f6c24c959488d583dce56e0fc9"} Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.146865 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"62a15845bb736d505c63a1332a1a52bbe15d847e6a23b77d16898df1a2278d38"} Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.204567 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-zwx7b" podStartSLOduration=78.204545944 podStartE2EDuration="1m18.204545944s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:49.181708176 +0000 UTC m=+101.625338453" watchObservedRunningTime="2026-01-22 09:53:49.204545944 +0000 UTC m=+101.648176211" Jan 22 09:53:49 crc kubenswrapper[5101]: I0122 09:53:49.527634 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:49 crc kubenswrapper[5101]: E0122 09:53:49.527828 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:50 crc kubenswrapper[5101]: I0122 09:53:50.161974 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="37f00b6c1144f94cd43bcdf325322fe086235778d4151de7f5be97ad69a8682a" exitCode=0 Jan 22 09:53:50 crc kubenswrapper[5101]: I0122 09:53:50.162899 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"37f00b6c1144f94cd43bcdf325322fe086235778d4151de7f5be97ad69a8682a"} Jan 22 09:53:50 crc kubenswrapper[5101]: I0122 09:53:50.569884 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:50 crc kubenswrapper[5101]: E0122 09:53:50.570325 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:50 crc kubenswrapper[5101]: I0122 09:53:50.571141 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:50 crc kubenswrapper[5101]: E0122 09:53:50.571216 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:50 crc kubenswrapper[5101]: I0122 09:53:50.571521 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:50 crc kubenswrapper[5101]: E0122 09:53:50.571643 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.175759 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.175805 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.175832 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.176066 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.176116 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.176136 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.176240 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:59.176193726 +0000 UTC m=+111.619824033 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.176865 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177065 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177086 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177101 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177160 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:53:59.177143873 +0000 UTC m=+111.620774180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177248 5101 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177291 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:59.177279066 +0000 UTC m=+111.620909373 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177346 5101 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.177385 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:53:59.177374729 +0000 UTC m=+111.621005036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.190615 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerStarted","Data":"166ccd6e0cd0220ce91f70de527be740155a67d8da403e588cfadd2c8df721f3"} Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.380237 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.380469 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:53:59.380449612 +0000 UTC m=+111.824079879 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.481149 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.481344 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.481415 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:53:59.48139922 +0000 UTC m=+111.925029477 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:51 crc kubenswrapper[5101]: I0122 09:53:51.527567 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:51 crc kubenswrapper[5101]: E0122 09:53:51.527771 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:52 crc kubenswrapper[5101]: I0122 09:53:52.198873 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" event={"ID":"588311af-b91e-4596-931b-bcb1869b181a","Type":"ContainerStarted","Data":"f747ef51866b1eb3cd2b8be9801a77280988f7ad66ed19ab04e8768f814ed9ed"} Jan 22 09:53:52 crc kubenswrapper[5101]: I0122 09:53:52.235883 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" podStartSLOduration=81.235865684 podStartE2EDuration="1m21.235865684s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:52.235584646 +0000 UTC m=+104.679214933" watchObservedRunningTime="2026-01-22 09:53:52.235865684 +0000 UTC m=+104.679495941" Jan 22 09:53:52 crc kubenswrapper[5101]: I0122 09:53:52.528024 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:52 crc kubenswrapper[5101]: I0122 09:53:52.528059 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:52 crc kubenswrapper[5101]: I0122 09:53:52.528060 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:52 crc kubenswrapper[5101]: E0122 09:53:52.528596 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:52 crc kubenswrapper[5101]: E0122 09:53:52.528704 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:52 crc kubenswrapper[5101]: E0122 09:53:52.528910 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.203150 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.203752 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.203765 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.353797 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.362330 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.564621 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:53 crc kubenswrapper[5101]: E0122 09:53:53.564768 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:53 crc kubenswrapper[5101]: I0122 09:53:53.564927 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:53 crc kubenswrapper[5101]: E0122 09:53:53.565043 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:54 crc kubenswrapper[5101]: I0122 09:53:54.208255 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="166ccd6e0cd0220ce91f70de527be740155a67d8da403e588cfadd2c8df721f3" exitCode=0 Jan 22 09:53:54 crc kubenswrapper[5101]: I0122 09:53:54.208321 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"166ccd6e0cd0220ce91f70de527be740155a67d8da403e588cfadd2c8df721f3"} Jan 22 09:53:54 crc kubenswrapper[5101]: I0122 09:53:54.620664 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:54 crc kubenswrapper[5101]: I0122 09:53:54.620715 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:54 crc kubenswrapper[5101]: E0122 09:53:54.620845 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:54 crc kubenswrapper[5101]: E0122 09:53:54.620988 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:55 crc kubenswrapper[5101]: I0122 09:53:55.214928 5101 generic.go:358] "Generic (PLEG): container finished" podID="9ddc1292-91f9-4766-9422-1ccd8ae15b14" containerID="49bd12417494f28c8fe895fb42dbcfb633180dca47d537a34b5ef17ac937877c" exitCode=0 Jan 22 09:53:55 crc kubenswrapper[5101]: I0122 09:53:55.215040 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerDied","Data":"49bd12417494f28c8fe895fb42dbcfb633180dca47d537a34b5ef17ac937877c"} Jan 22 09:53:55 crc kubenswrapper[5101]: I0122 09:53:55.528273 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:55 crc kubenswrapper[5101]: I0122 09:53:55.528350 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:55 crc kubenswrapper[5101]: E0122 09:53:55.528466 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:55 crc kubenswrapper[5101]: E0122 09:53:55.528538 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:56 crc kubenswrapper[5101]: I0122 09:53:56.265322 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2kpwn"] Jan 22 09:53:56 crc kubenswrapper[5101]: I0122 09:53:56.266312 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:56 crc kubenswrapper[5101]: E0122 09:53:56.267368 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:56 crc kubenswrapper[5101]: I0122 09:53:56.528478 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:56 crc kubenswrapper[5101]: E0122 09:53:56.528633 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:57 crc kubenswrapper[5101]: I0122 09:53:57.227665 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-642cb" event={"ID":"9ddc1292-91f9-4766-9422-1ccd8ae15b14","Type":"ContainerStarted","Data":"15b448efdd12b0d833268acb6d124327f6eb6cdcf466814f79620ca9299292a8"} Jan 22 09:53:57 crc kubenswrapper[5101]: I0122 09:53:57.527684 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:57 crc kubenswrapper[5101]: E0122 09:53:57.527834 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:53:57 crc kubenswrapper[5101]: I0122 09:53:57.528005 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:57 crc kubenswrapper[5101]: E0122 09:53:57.528203 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:57 crc kubenswrapper[5101]: I0122 09:53:57.529407 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:53:57 crc kubenswrapper[5101]: E0122 09:53:57.529947 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 09:53:58 crc kubenswrapper[5101]: I0122 09:53:58.534347 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:58 crc kubenswrapper[5101]: I0122 09:53:58.534347 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:58 crc kubenswrapper[5101]: E0122 09:53:58.535835 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2kpwn" podUID="4d9d0a50-8eab-4184-b6dc-38872680242c" Jan 22 09:53:58 crc kubenswrapper[5101]: E0122 09:53:58.536442 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.184754 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185050 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185213 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185232 5101 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185304 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.185284775 +0000 UTC m=+127.628915042 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.185151 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.185793 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185841 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185901 5101 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185919 5101 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:53:59 crc 
kubenswrapper[5101]: E0122 09:53:59.185939 5101 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.185968 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.185940 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.185930183 +0000 UTC m=+127.629560450 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.186046 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.186032126 +0000 UTC m=+127.629662393 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.186167 5101 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.186225 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.186214211 +0000 UTC m=+127.629844478 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.388779 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.389168 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.389145249 +0000 UTC m=+127.832775516 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.490597 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.490787 5101 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.491172 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs podName:4d9d0a50-8eab-4184-b6dc-38872680242c nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.491150449 +0000 UTC m=+127.934780716 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs") pod "network-metrics-daemon-2kpwn" (UID: "4d9d0a50-8eab-4184-b6dc-38872680242c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.528261 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:53:59 crc kubenswrapper[5101]: I0122 09:53:59.528272 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.528455 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 09:53:59 crc kubenswrapper[5101]: E0122 09:53:59.528627 5101 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.321940 5101 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.322189 5101 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.357861 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-642cb" podStartSLOduration=89.357837697 podStartE2EDuration="1m29.357837697s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:53:57.257212111 +0000 UTC m=+109.700842388" watchObservedRunningTime="2026-01-22 09:54:00.357837697 +0000 UTC m=+112.801467964" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.359233 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-w5j22"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.370334 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.370956 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.373555 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.373920 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.376241 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.376413 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.378220 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.378735 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-66wpn"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.378949 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.379876 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.379908 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380263 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380360 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380488 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380533 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380745 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380849 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.380975 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.381380 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.381514 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.381526 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.381561 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.384716 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.385818 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 09:54:00 crc 
kubenswrapper[5101]: I0122 09:54:00.386236 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.391645 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.392096 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.392307 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.392554 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.393320 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.393524 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.392617 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.393443 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.394301 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 09:54:00 crc 
kubenswrapper[5101]: I0122 09:54:00.394312 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.394473 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.394640 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.394788 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395134 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395266 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395483 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395666 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395864 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.395983 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 09:54:00 crc 
kubenswrapper[5101]: I0122 09:54:00.396108 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.400898 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.403460 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.404473 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.424229 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.424480 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8hrzs"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.424958 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.425629 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.427677 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.429716 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.430770 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.433156 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-w2759"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.435669 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.436004 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.436049 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-w2759" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.437331 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.437470 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.438144 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.438187 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.438333 5101 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.438336 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.439384 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.439742 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.440316 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.440542 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.441100 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.449777 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.450761 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.450776 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.450979 5101 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.451185 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.451789 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.452080 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.452262 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.452442 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.453243 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.454753 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.454897 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.455450 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.455906 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.455957 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.456904 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.457094 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.458193 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.463292 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.463652 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.465824 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466011 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466175 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466355 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466522 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466698 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.466808 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.468047 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.471076 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.472114 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.474614 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.475230 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.475333 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.475403 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.477352 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.482692 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.487648 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.507060 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.507108 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.509967 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510667 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sp4f\" (UniqueName: \"kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510737 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/40ddbf39-c363-4a9d-90d2-911b700eb8d1-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510777 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9h5\" (UniqueName: \"kubernetes.io/projected/112f7c63-b876-4377-8418-18d8abc92100-kube-api-access-sf9h5\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510812 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh5z\" (UniqueName: \"kubernetes.io/projected/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-kube-api-access-jbh5z\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510856 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.510890 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511166 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-audit-dir\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511200 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57gx\" (UniqueName: \"kubernetes.io/projected/ada11655-156b-4b1e-ad19-8391c89c8e6b-kube-api-access-k57gx\") pod \"downloads-747b44746d-w2759\" (UID: \"ada11655-156b-4b1e-ad19-8391c89c8e6b\") " pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511236 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-trusted-ca\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511259 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511395 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511445 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223b7c4c-942e-44bd-bf88-67db1adfed29-config\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511498 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-client\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511527 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-encryption-config\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511562 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-images\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511586 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511874 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511913 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-serving-cert\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.511941 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-serving-cert\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512003 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4w7n\" (UniqueName: \"kubernetes.io/projected/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-kube-api-access-t4w7n\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512032 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512064 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5j6\" (UniqueName: \"kubernetes.io/projected/40ddbf39-c363-4a9d-90d2-911b700eb8d1-kube-api-access-nv5j6\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512093 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512139 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8dcs\" (UniqueName: \"kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512382 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512452 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-node-pullsecrets\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512479 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-image-import-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512505 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512609 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512783 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512811 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512837 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-audit-policies\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.512861 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513083 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrpsr\" (UniqueName: \"kubernetes.io/projected/1c81934b-984b-4537-b93e-ecec345fdf73-kube-api-access-xrpsr\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513119 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513144 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-serving-cert\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513177 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513254 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.513203 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-serving-cert\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.517065 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-etcd-client\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.517131 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c81934b-984b-4537-b93e-ecec345fdf73-audit-dir\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.517177 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223b7c4c-942e-44bd-bf88-67db1adfed29-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.517245 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-config\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.521030 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.521088 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.521439 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.517319 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-encryption-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.522038 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-serving-ca\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.522229 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.522215 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-config\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.579459 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.579528 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-config\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.579559 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-audit\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.579587 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8t64\" (UniqueName: \"kubernetes.io/projected/223b7c4c-942e-44bd-bf88-67db1adfed29-kube-api-access-v8t64\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.580466 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.580633 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.580741 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.581161 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.581219 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.581267 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.581172 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.581537 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.583518 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x59wv"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.583635 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.583640 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.583860 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.584978 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.585659 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.590900 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591070 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591250 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591289 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591370 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591500 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591535 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591697 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.591980 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.592265 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.592547 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.593249 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.595778 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.599945 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.615365 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.620097 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-66wpn"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.620140 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.620155 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-hwdqt"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.620779 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.624509 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.624929 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.627507 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-w5j22"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.627542 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.627727 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.630367 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.630392 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.630545 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.633215 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.634499 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.634711 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.639571 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.639746 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.642591 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.642756 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.645602 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8hrzs"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.645631 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.645800 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.648519 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.648968 5101 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.652157 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.652439 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.653174 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.654907 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.655078 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.657357 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.657536 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.662150 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.662320 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.671956 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-gl7dl"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.672200 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682071 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-serving-cert\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682121 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-serving-cert\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682151 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682415 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4w7n\" (UniqueName: \"kubernetes.io/projected/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-kube-api-access-t4w7n\") 
pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682459 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682481 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682516 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5j6\" (UniqueName: \"kubernetes.io/projected/40ddbf39-c363-4a9d-90d2-911b700eb8d1-kube-api-access-nv5j6\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682534 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682567 5101 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8dcs\" (UniqueName: \"kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682819 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4g5\" (UniqueName: \"kubernetes.io/projected/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-kube-api-access-9b4g5\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682899 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682956 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.682987 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-node-pullsecrets\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683013 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-image-import-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683048 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683097 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683120 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683144 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-audit-policies\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683164 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683200 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683229 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrpsr\" (UniqueName: \"kubernetes.io/projected/1c81934b-984b-4537-b93e-ecec345fdf73-kube-api-access-xrpsr\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683263 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd43162d-92a7-42ed-8615-ce99aaf16067-config\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683293 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683317 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-serving-cert\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683341 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683365 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683391 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683445 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683410 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683573 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-node-pullsecrets\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.684110 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.684343 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config\") pod 
\"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.684718 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.685152 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.685714 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-audit-policies\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.686309 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.686636 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-image-import-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.683463 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-serving-cert\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687041 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-etcd-client\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687063 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c81934b-984b-4537-b93e-ecec345fdf73-audit-dir\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687080 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223b7c4c-942e-44bd-bf88-67db1adfed29-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687119 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-config\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687141 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-encryption-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687157 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-serving-ca\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687191 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-config\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687211 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687232 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-config\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687254 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd43162d-92a7-42ed-8615-ce99aaf16067-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687278 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmvt\" (UniqueName: \"kubernetes.io/projected/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-kube-api-access-cqmvt\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687304 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-audit\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687322 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687345 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v8t64\" (UniqueName: \"kubernetes.io/projected/223b7c4c-942e-44bd-bf88-67db1adfed29-kube-api-access-v8t64\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687364 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687370 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687391 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sp4f\" (UniqueName: \"kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687411 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/40ddbf39-c363-4a9d-90d2-911b700eb8d1-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687450 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9h5\" (UniqueName: \"kubernetes.io/projected/112f7c63-b876-4377-8418-18d8abc92100-kube-api-access-sf9h5\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687470 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-auth-proxy-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687489 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687512 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh5z\" (UniqueName: \"kubernetes.io/projected/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-kube-api-access-jbh5z\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 
09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687530 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687549 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687573 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687589 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687607 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-audit-dir\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: 
\"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687627 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k57gx\" (UniqueName: \"kubernetes.io/projected/ada11655-156b-4b1e-ad19-8391c89c8e6b-kube-api-access-k57gx\") pod \"downloads-747b44746d-w2759\" (UID: \"ada11655-156b-4b1e-ad19-8391c89c8e6b\") " pod="openshift-console/downloads-747b44746d-w2759" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687649 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-trusted-ca\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687665 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687681 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223b7c4c-942e-44bd-bf88-67db1adfed29-config\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687703 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-machine-approver-tls\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687720 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687737 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-client\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687752 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-encryption-config\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687771 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: 
I0122 09:54:00.687790 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd43162d-92a7-42ed-8615-ce99aaf16067-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687808 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd43162d-92a7-42ed-8615-ce99aaf16067-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687825 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687848 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-images\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687890 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-trusted-ca-bundle\") pod 
\"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687912 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687956 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687974 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz48z\" (UniqueName: \"kubernetes.io/projected/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-kube-api-access-zz48z\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.687996 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688017 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688190 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688550 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-serving-cert\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688569 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688585 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-serving-cert\") pod \"console-operator-67c89758df-8hrzs\" (UID: 
\"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688662 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/112f7c63-b876-4377-8418-18d8abc92100-audit-dir\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688758 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.688893 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-serving-cert\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.689362 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-config\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.689590 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-serving-ca\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.689700 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-trusted-ca\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.689709 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-config\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.689950 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.690582 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223b7c4c-942e-44bd-bf88-67db1adfed29-config\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.690916 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: 
\"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.691574 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.691642 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-audit\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.691887 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40ddbf39-c363-4a9d-90d2-911b700eb8d1-images\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.691952 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c81934b-984b-4537-b93e-ecec345fdf73-audit-dir\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.692418 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.692614 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.692707 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-config\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.692738 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-encryption-config\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693138 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693252 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/112f7c63-b876-4377-8418-18d8abc92100-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693304 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/40ddbf39-c363-4a9d-90d2-911b700eb8d1-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693403 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-encryption-config\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693407 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.693677 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.694942 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c81934b-984b-4537-b93e-ecec345fdf73-etcd-client\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.695402 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f7c63-b876-4377-8418-18d8abc92100-etcd-client\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.701186 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-jrw7k"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.701228 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-serving-cert\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.701469 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.702750 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223b7c4c-942e-44bd-bf88-67db1adfed29-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.704793 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.704839 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-w2759"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.704856 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.705782 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.708008 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z4tq2"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.708248 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.711625 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.711751 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.713854 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.714256 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4lvr8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.714402 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.716989 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gf9jd"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.717211 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.720652 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.720740 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x59wv"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.720799 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.720802 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721014 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721038 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721053 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721067 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721080 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721093 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-hwdqt"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721106 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721117 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.721130 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l6rf4"] 
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.726583 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4q7cw"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.726675 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729110 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729144 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729160 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gl7dl"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729173 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729185 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729204 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4lvr8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729216 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729231 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729249 5101 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-xzmmw"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.729410 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4q7cw" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731807 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731836 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731853 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731869 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731881 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4q7cw"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731896 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8"] Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731904 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xzmmw"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.731909 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.732016 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.732028 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.732041 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z4tq2"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.732054 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gf9jd"]
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.733752 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.753999 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.773457 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.788893 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.788937 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.788966 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789004 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789056 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789097 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd43162d-92a7-42ed-8615-ce99aaf16067-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789124 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmvt\" (UniqueName: \"kubernetes.io/projected/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-kube-api-access-cqmvt\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789156 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789183 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789207 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789230 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-auth-proxy-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789311 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789339 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789358 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789388 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-machine-approver-tls\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789403 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789434 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789453 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789472 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd43162d-92a7-42ed-8615-ce99aaf16067-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.789605 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.791603 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-auth-proxy-config\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.791811 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.791879 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792227 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd43162d-92a7-42ed-8615-ce99aaf16067-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792363 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd43162d-92a7-42ed-8615-ce99aaf16067-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792411 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792487 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792494 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792760 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.792919 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793052 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793131 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz48z\" (UniqueName: \"kubernetes.io/projected/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-kube-api-access-zz48z\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793198 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793259 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793616 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vvq\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793734 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793790 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793841 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793886 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.793968 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4g5\" (UniqueName: \"kubernetes.io/projected/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-kube-api-access-9b4g5\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.794013 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.794097 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.794175 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd43162d-92a7-42ed-8615-ce99aaf16067-config\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.794987 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd43162d-92a7-42ed-8615-ce99aaf16067-config\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.795003 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.795004 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-machine-approver-tls\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.795731 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.795768 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.797784 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 22 09:54:00 crc kubenswrapper[5101]: E0122 09:54:00.797948 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.29792472 +0000 UTC m=+113.741554987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.799135 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.799165 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.800400 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd43162d-92a7-42ed-8615-ce99aaf16067-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.801766 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.802306 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.802483 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.804922 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.805065 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.805554 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.813101 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.833562 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.853444 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.881638 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.893492 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895118 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:00 crc kubenswrapper[5101]: E0122 09:54:00.895317 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.395290929 +0000 UTC m=+113.838921196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895407 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895472 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b24dae6c-3ca8-4404-8587-69276f17daf6-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895527 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895564 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsj6\" (UniqueName: \"kubernetes.io/projected/901b7095-5e60-483e-996c-1d63888331ce-kube-api-access-vtsj6\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895591 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrjw\" (UniqueName: \"kubernetes.io/projected/1d1245dc-9786-483a-a1b9-b187dafc3ab4-kube-api-access-qtrjw\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895620 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-mountpoint-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895702 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zl8\" (UniqueName: \"kubernetes.io/projected/660347db-42cb-4f31-801d-97c3c3523f66-kube-api-access-89zl8\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895783 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895846 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-profile-collector-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.895986 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896047 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd3171cb-920d-48bd-9653-6cd577a560bd-serving-cert\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896097 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896136 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896759 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896800 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hkv\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-kube-api-access-w4hkv\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896825 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-tmpfs\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896855 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-serving-cert\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896879 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-service-ca\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.896911 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-registration-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897031 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897073 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rq87\" (UniqueName: \"kubernetes.io/projected/b36514f0-26f0-4728-ae25-65a5ba99d2fa-kube-api-access-6rq87\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897115 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8996\" (UniqueName: \"kubernetes.io/projected/bd3171cb-920d-48bd-9653-6cd577a560bd-kube-api-access-k8996\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897148 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-socket-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897171 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7k6z\" (UniqueName: \"kubernetes.io/projected/4484a02d-d1db-4408-806f-3116be160354-kube-api-access-z7k6z\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897214 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr"
Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897261 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1c8d56-3eac-4ef1-9d84-786af0465c79-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897540 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897576 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897600 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-config\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897660 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897717 5101 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tqs\" (UniqueName: \"kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897764 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vvq\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897787 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-tmp-dir\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897820 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kg2g\" (UniqueName: \"kubernetes.io/projected/31d304d5-f99c-4384-87d1-5ffffb5d2694-kube-api-access-8kg2g\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.897847 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d07fefdf-c0b8-488e-94ec-b54954cfacce-tmpfs\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898008 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898053 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898082 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898129 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-srv-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898163 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/04938683-0667-47f5-8b0f-69dfb43c4c3a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898188 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8c7\" (UniqueName: \"kubernetes.io/projected/3b1c8d56-3eac-4ef1-9d84-786af0465c79-kube-api-access-wg8c7\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898216 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898263 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwjj\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-kube-api-access-krwjj\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898292 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-srv-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: 
\"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898318 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898342 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898372 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898396 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h789\" (UniqueName: \"kubernetes.io/projected/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-kube-api-access-9h789\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898465 5101 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898497 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898550 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1c8d56-3eac-4ef1-9d84-786af0465c79-config\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898575 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-serving-cert\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898619 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs\") pod \"router-default-68cf44c8b8-jrw7k\" 
(UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898657 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqrh\" (UniqueName: \"kubernetes.io/projected/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-kube-api-access-blqrh\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898829 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-trusted-ca-bundle\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898864 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-oauth-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898894 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.898994 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899000 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qrzq\" (UniqueName: \"kubernetes.io/projected/98b4c472-57be-436c-a925-427f5bc72fca-kube-api-access-7qrzq\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899105 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899132 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd3171cb-920d-48bd-9653-6cd577a560bd-available-featuregates\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899151 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhvbt\" (UniqueName: \"kubernetes.io/projected/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-kube-api-access-vhvbt\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " 
pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899171 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-config\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899217 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899236 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxt4\" (UniqueName: \"kubernetes.io/projected/8c20fd39-64f0-40d4-9c12-915763fddfde-kube-api-access-8nxt4\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899298 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-tmp\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899324 5101 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a06e89cc-4b31-4452-95da-bcb17c66f029-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899345 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899549 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899673 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d1245dc-9786-483a-a1b9-b187dafc3ab4-tmp-dir\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899793 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-plugins-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: 
\"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899887 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899932 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b24dae6c-3ca8-4404-8587-69276f17daf6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899960 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.899982 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900057 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900086 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhg6\" (UniqueName: \"kubernetes.io/projected/4a368ef1-f996-42c8-ae62-a06dcff3e625-kube-api-access-trhg6\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900117 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-images\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900235 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900288 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04938683-0667-47f5-8b0f-69dfb43c4c3a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900320 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tql\" (UniqueName: \"kubernetes.io/projected/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-kube-api-access-52tql\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900359 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7bw5\" (UniqueName: \"kubernetes.io/projected/d07fefdf-c0b8-488e-94ec-b54954cfacce-kube-api-access-h7bw5\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900381 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-config\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900430 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900456 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b1c8d56-3eac-4ef1-9d84-786af0465c79-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900485 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lpd\" (UniqueName: \"kubernetes.io/projected/1bf878a0-4591-4ee2-96e9-db36fe28422d-kube-api-access-t6lpd\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900502 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8lht\" (UniqueName: \"kubernetes.io/projected/0c550c98-0e20-4316-8338-5268b336f2a2-kube-api-access-z8lht\") pod \"migrator-866fcbc849-45b99\" (UID: \"0c550c98-0e20-4316-8338-5268b336f2a2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900511 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900539 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900561 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900573 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqzb\" (UniqueName: \"kubernetes.io/projected/932ff910-1ca7-4354-a306-1ce5f15f4f92-kube-api-access-jkqzb\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900614 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2l2\" (UniqueName: \"kubernetes.io/projected/a06e89cc-4b31-4452-95da-bcb17c66f029-kube-api-access-bp2l2\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900643 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900666 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4lw\" (UniqueName: 
\"kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900722 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-client\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900753 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900781 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900806 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900832 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900857 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svnz\" (UniqueName: \"kubernetes.io/projected/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-kube-api-access-2svnz\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900880 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900905 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/660347db-42cb-4f31-801d-97c3c3523f66-tmpfs\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900931 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-serving-cert\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900957 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98b4c472-57be-436c-a925-427f5bc72fca-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.900978 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-oauth-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901003 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:00 crc kubenswrapper[5101]: E0122 09:54:00.901025 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.401010909 +0000 UTC m=+113.844641176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901069 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901095 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-csi-data-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901131 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901165 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901187 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901195 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901217 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6pl\" (UniqueName: \"kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901242 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-config\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:00 crc 
kubenswrapper[5101]: I0122 09:54:00.901264 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.901295 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.902269 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4484a02d-d1db-4408-806f-3116be160354-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.903291 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.913605 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.933529 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.953110 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.974686 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:00 crc kubenswrapper[5101]: I0122 09:54:00.994178 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003106 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.003279 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.503243715 +0000 UTC m=+113.946873992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003416 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-socket-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003481 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k6z\" (UniqueName: \"kubernetes.io/projected/4484a02d-d1db-4408-806f-3116be160354-kube-api-access-z7k6z\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003513 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003536 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1c8d56-3eac-4ef1-9d84-786af0465c79-serving-cert\") pod 
\"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003560 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003587 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003614 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-config\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003646 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-64tqs\" (UniqueName: \"kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc 
kubenswrapper[5101]: I0122 09:54:01.003677 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-tmp-dir\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003703 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8kg2g\" (UniqueName: \"kubernetes.io/projected/31d304d5-f99c-4384-87d1-5ffffb5d2694-kube-api-access-8kg2g\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003728 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d07fefdf-c0b8-488e-94ec-b54954cfacce-tmpfs\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003735 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-socket-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003756 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:01 crc 
kubenswrapper[5101]: I0122 09:54:01.003793 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003824 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-srv-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003849 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04938683-0667-47f5-8b0f-69dfb43c4c3a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003877 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8c7\" (UniqueName: \"kubernetes.io/projected/3b1c8d56-3eac-4ef1-9d84-786af0465c79-kube-api-access-wg8c7\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003896 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003924 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krwjj\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-kube-api-access-krwjj\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003948 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-srv-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003968 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.003986 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004004 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004023 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9h789\" (UniqueName: \"kubernetes.io/projected/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-kube-api-access-9h789\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004042 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004059 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004079 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1c8d56-3eac-4ef1-9d84-786af0465c79-config\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004100 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-serving-cert\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004119 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004136 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-blqrh\" (UniqueName: \"kubernetes.io/projected/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-kube-api-access-blqrh\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004193 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-trusted-ca-bundle\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004211 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-oauth-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004219 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-tmp-dir\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004232 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004292 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d07fefdf-c0b8-488e-94ec-b54954cfacce-tmpfs\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004666 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004716 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7qrzq\" (UniqueName: \"kubernetes.io/projected/98b4c472-57be-436c-a925-427f5bc72fca-kube-api-access-7qrzq\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004785 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd3171cb-920d-48bd-9653-6cd577a560bd-available-featuregates\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004816 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vhvbt\" (UniqueName: \"kubernetes.io/projected/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-kube-api-access-vhvbt\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004842 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-config\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004889 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nxt4\" (UniqueName: \"kubernetes.io/projected/8c20fd39-64f0-40d4-9c12-915763fddfde-kube-api-access-8nxt4\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004926 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-tmp\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004961 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a06e89cc-4b31-4452-95da-bcb17c66f029-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.004988 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005023 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005055 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/1d1245dc-9786-483a-a1b9-b187dafc3ab4-tmp-dir\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005080 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-plugins-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005114 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b24dae6c-3ca8-4404-8587-69276f17daf6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005142 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005164 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005196 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005209 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1c8d56-3eac-4ef1-9d84-786af0465c79-config\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005221 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trhg6\" (UniqueName: \"kubernetes.io/projected/4a368ef1-f996-42c8-ae62-a06dcff3e625-kube-api-access-trhg6\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005333 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-images\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005373 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005388 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd3171cb-920d-48bd-9653-6cd577a560bd-available-featuregates\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005402 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04938683-0667-47f5-8b0f-69dfb43c4c3a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005453 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-trusted-ca-bundle\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005486 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52tql\" (UniqueName: \"kubernetes.io/projected/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-kube-api-access-52tql\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005540 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7bw5\" (UniqueName: \"kubernetes.io/projected/d07fefdf-c0b8-488e-94ec-b54954cfacce-kube-api-access-h7bw5\") pod 
\"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005580 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-config\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005637 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005655 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b1c8d56-3eac-4ef1-9d84-786af0465c79-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005695 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lpd\" (UniqueName: \"kubernetes.io/projected/1bf878a0-4591-4ee2-96e9-db36fe28422d-kube-api-access-t6lpd\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005714 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z8lht\" (UniqueName: \"kubernetes.io/projected/0c550c98-0e20-4316-8338-5268b336f2a2-kube-api-access-z8lht\") pod \"migrator-866fcbc849-45b99\" (UID: \"0c550c98-0e20-4316-8338-5268b336f2a2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005730 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005747 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqzb\" (UniqueName: \"kubernetes.io/projected/932ff910-1ca7-4354-a306-1ce5f15f4f92-kube-api-access-jkqzb\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005776 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bp2l2\" (UniqueName: \"kubernetes.io/projected/a06e89cc-4b31-4452-95da-bcb17c66f029-kube-api-access-bp2l2\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005794 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw" Jan 22 
09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005813 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4lw\" (UniqueName: \"kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005827 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-oauth-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005845 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-client\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005954 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005997 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: 
\"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006018 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006045 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006072 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2svnz\" (UniqueName: \"kubernetes.io/projected/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-kube-api-access-2svnz\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006091 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006109 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/660347db-42cb-4f31-801d-97c3c3523f66-tmpfs\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006120 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006127 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-tmp\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006144 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-serving-cert\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006164 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98b4c472-57be-436c-a925-427f5bc72fca-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006270 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-oauth-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006298 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006303 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006316 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006334 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-csi-data-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 
crc kubenswrapper[5101]: I0122 09:54:01.006351 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006377 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.005954 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a06e89cc-4b31-4452-95da-bcb17c66f029-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006394 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006393 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b1c8d56-3eac-4ef1-9d84-786af0465c79-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: 
\"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.006471 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.506457015 +0000 UTC m=+113.950087282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006506 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6pl\" (UniqueName: \"kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006622 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-config\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006659 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006691 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006706 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d1245dc-9786-483a-a1b9-b187dafc3ab4-tmp-dir\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006716 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006725 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4484a02d-d1db-4408-806f-3116be160354-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 
09:54:01.006776 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b24dae6c-3ca8-4404-8587-69276f17daf6-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006700 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-csi-data-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006841 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/660347db-42cb-4f31-801d-97c3c3523f66-tmpfs\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006862 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006882 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-plugins-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006913 5101 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsj6\" (UniqueName: \"kubernetes.io/projected/901b7095-5e60-483e-996c-1d63888331ce-kube-api-access-vtsj6\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006945 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrjw\" (UniqueName: \"kubernetes.io/projected/1d1245dc-9786-483a-a1b9-b187dafc3ab4-kube-api-access-qtrjw\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006971 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-mountpoint-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.006999 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89zl8\" (UniqueName: \"kubernetes.io/projected/660347db-42cb-4f31-801d-97c3c3523f66-kube-api-access-89zl8\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007026 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 
09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007039 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-mountpoint-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007056 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-profile-collector-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007062 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/b24dae6c-3ca8-4404-8587-69276f17daf6-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007126 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007158 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd3171cb-920d-48bd-9653-6cd577a560bd-serving-cert\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") 
" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007181 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007203 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007287 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007316 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hkv\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-kube-api-access-w4hkv\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007339 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-tmpfs\") pod 
\"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007365 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-serving-cert\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007386 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-service-ca\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007401 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007558 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-registration-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007608 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007634 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rq87\" (UniqueName: \"kubernetes.io/projected/b36514f0-26f0-4728-ae25-65a5ba99d2fa-kube-api-access-6rq87\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007658 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007663 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8996\" (UniqueName: \"kubernetes.io/projected/bd3171cb-920d-48bd-9653-6cd577a560bd-kube-api-access-k8996\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.007857 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/932ff910-1ca7-4354-a306-1ce5f15f4f92-registration-dir\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.008003 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-tmpfs\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.008210 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b24dae6c-3ca8-4404-8587-69276f17daf6-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.008372 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.008559 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1c8d56-3eac-4ef1-9d84-786af0465c79-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.008611 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-service-ca\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.009288 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.009798 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b24dae6c-3ca8-4404-8587-69276f17daf6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.010002 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd3171cb-920d-48bd-9653-6cd577a560bd-serving-cert\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.010110 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4484a02d-d1db-4408-806f-3116be160354-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.010960 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-oauth-config\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 
crc kubenswrapper[5101]: I0122 09:54:01.011240 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf878a0-4591-4ee2-96e9-db36fe28422d-console-serving-cert\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.014267 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.032982 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.036850 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-config\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.053969 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.073384 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.080909 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-serving-cert\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 
09:54:01.094638 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.104749 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.109048 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.109319 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.609295507 +0000 UTC m=+114.052925774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.109693 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.110151 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.61013344 +0000 UTC m=+114.053763717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.113750 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.119923 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d1245dc-9786-483a-a1b9-b187dafc3ab4-etcd-client\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.134518 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.139123 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-srv-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.155697 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.160452 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.161146 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/660347db-42cb-4f31-801d-97c3c3523f66-profile-collector-cert\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.168268 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.173642 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.176574 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-config\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.194360 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 09:54:01 crc 
kubenswrapper[5101]: I0122 09:54:01.198473 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-serving-cert\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.210749 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.211094 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.7110724 +0000 UTC m=+114.154702667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.211486 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.211931 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.711909483 +0000 UTC m=+114.155539780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.214099 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.233695 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.254149 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.274247 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.294525 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.312540 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 
09:54:01.312810 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.81276753 +0000 UTC m=+114.256397797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.313568 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.314004 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.314063 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.814046746 +0000 UTC m=+114.257677013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.334160 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.344236 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-serving-cert\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.354833 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.374701 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.378258 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-config\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.394462 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.413479 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.415264 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.915229012 +0000 UTC m=+114.358859279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.415595 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.416994 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc 
kubenswrapper[5101]: E0122 09:54:01.417725 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:01.917691301 +0000 UTC m=+114.361321728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.419253 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04938683-0667-47f5-8b0f-69dfb43c4c3a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.443029 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.449609 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04938683-0667-47f5-8b0f-69dfb43c4c3a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.453720 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" 
Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.473647 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.494127 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.513874 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.517701 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.517903 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.017876229 +0000 UTC m=+114.461506496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.518557 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.518908 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.018896918 +0000 UTC m=+114.462527185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.519509 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98b4c472-57be-436c-a925-427f5bc72fca-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.527826 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.527860 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.534698 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.554435 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.557249 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98b4c472-57be-436c-a925-427f5bc72fca-images\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.573966 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.577241 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d07fefdf-c0b8-488e-94ec-b54954cfacce-srv-cert\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.594571 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.614357 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc 
kubenswrapper[5101]: I0122 09:54:01.619677 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.619997 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.11992106 +0000 UTC m=+114.563551327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.635144 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.650738 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.662108 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.668206 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.672392 5101 request.go:752] "Waited before sending request" delay="1.014463326s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.674399 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.694683 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.715858 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.721111 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.722603 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.723091 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.223071011 +0000 UTC m=+114.666701278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.734039 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.754429 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.765177 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-config\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.775084 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.794386 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.804807 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.824072 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.824273 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.324243347 +0000 UTC m=+114.767873614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.825015 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.825331 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.325313317 +0000 UTC m=+114.768943584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.850954 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8dcs\" (UniqueName: \"kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs\") pod \"route-controller-manager-776cdc94d6-flq7f\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.870254 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4w7n\" (UniqueName: \"kubernetes.io/projected/8e9fa7a6-9771-4006-a4fb-2ab86f9dd802-kube-api-access-t4w7n\") pod \"authentication-operator-7f5c659b84-s2rsq\" (UID: \"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.890238 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5j6\" (UniqueName: \"kubernetes.io/projected/40ddbf39-c363-4a9d-90d2-911b700eb8d1-kube-api-access-nv5j6\") pod \"machine-api-operator-755bb95488-66wpn\" (UID: \"40ddbf39-c363-4a9d-90d2-911b700eb8d1\") " pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.909881 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrpsr\" (UniqueName: 
\"kubernetes.io/projected/1c81934b-984b-4537-b93e-ecec345fdf73-kube-api-access-xrpsr\") pod \"apiserver-8596bd845d-bbf9g\" (UID: \"1c81934b-984b-4537-b93e-ecec345fdf73\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.926526 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.926704 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.426675138 +0000 UTC m=+114.870305405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.927350 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:01 crc kubenswrapper[5101]: E0122 09:54:01.927896 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.427869772 +0000 UTC m=+114.871500039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.930549 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k57gx\" (UniqueName: \"kubernetes.io/projected/ada11655-156b-4b1e-ad19-8391c89c8e6b-kube-api-access-k57gx\") pod \"downloads-747b44746d-w2759\" (UID: \"ada11655-156b-4b1e-ad19-8391c89c8e6b\") " pod="openshift-console/downloads-747b44746d-w2759" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.937987 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.945906 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.948583 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9h5\" (UniqueName: \"kubernetes.io/projected/112f7c63-b876-4377-8418-18d8abc92100-kube-api-access-sf9h5\") pod \"apiserver-9ddfb9f55-w5j22\" (UID: \"112f7c63-b876-4377-8418-18d8abc92100\") " pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.969733 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh5z\" (UniqueName: \"kubernetes.io/projected/79b5eb1b-bf45-47ce-992d-4c1bae056fc5-kube-api-access-jbh5z\") pod \"console-operator-67c89758df-8hrzs\" (UID: \"79b5eb1b-bf45-47ce-992d-4c1bae056fc5\") " pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.971281 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" Jan 22 09:54:01 crc kubenswrapper[5101]: I0122 09:54:01.989808 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8t64\" (UniqueName: \"kubernetes.io/projected/223b7c4c-942e-44bd-bf88-67db1adfed29-kube-api-access-v8t64\") pod \"openshift-apiserver-operator-846cbfc458-k4jfz\" (UID: \"223b7c4c-942e-44bd-bf88-67db1adfed29\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.005072 5101 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.005222 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle podName:d93b1df8-3fed-437c-a7ff-7fea2a61fcb0 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.505188181 +0000 UTC m=+114.948818448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle") pod "service-ca-74545575db-gl7dl" (UID: "d93b1df8-3fed-437c-a7ff-7fea2a61fcb0") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.006764 5101 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.006817 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth podName:027bf0e3-cc9b-4a15-85ca-75cdb81a7a63 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:02.506807926 +0000 UTC m=+114.950438193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth") pod "router-default-68cf44c8b8-jrw7k" (UID: "027bf0e3-cc9b-4a15-85ca-75cdb81a7a63") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.007924 5101 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.007973 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs podName:31d304d5-f99c-4384-87d1-5ffffb5d2694 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.507964159 +0000 UTC m=+114.951594416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs") pod "machine-config-server-xzmmw" (UID: "31d304d5-f99c-4384-87d1-5ffffb5d2694") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.029400 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.030480 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:02.530451147 +0000 UTC m=+114.974081414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.039476 5101 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.040445 5101 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.040645 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle podName:027bf0e3-cc9b-4a15-85ca-75cdb81a7a63 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.540613621 +0000 UTC m=+114.984243888 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle") pod "router-default-68cf44c8b8-jrw7k" (UID: "027bf0e3-cc9b-4a15-85ca-75cdb81a7a63") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.040756 5101 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.040886 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert podName:e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.540869008 +0000 UTC m=+114.984499295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert") pod "packageserver-7d4fc7d867-kxcn8" (UID: "e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041057 5101 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041184 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls podName:4bcaae32-6fca-4120-8ca7-d9f5f709cb4c nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.541171036 +0000 UTC m=+114.984801303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls") pod "dns-default-4lvr8" (UID: "4bcaae32-6fca-4120-8ca7-d9f5f709cb4c") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041293 5101 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041399 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token podName:31d304d5-f99c-4384-87d1-5ffffb5d2694 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.541385852 +0000 UTC m=+114.985016119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token") pod "machine-config-server-xzmmw" (UID: "31d304d5-f99c-4384-87d1-5ffffb5d2694") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041530 5101 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041661 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate podName:027bf0e3-cc9b-4a15-85ca-75cdb81a7a63 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.54165046 +0000 UTC m=+114.985280727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate") pod "router-default-68cf44c8b8-jrw7k" (UID: "027bf0e3-cc9b-4a15-85ca-75cdb81a7a63") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041767 5101 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041870 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist podName:a6a20a61-7a61-4f52-b57c-c289c661f268 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.541860746 +0000 UTC m=+114.985491013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-l6rf4" (UID: "a6a20a61-7a61-4f52-b57c-c289c661f268") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.041969 5101 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042068 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config podName:8c20fd39-64f0-40d4-9c12-915763fddfde nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542059071 +0000 UTC m=+114.985689338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config") pod "kube-storage-version-migrator-operator-565b79b866-lgbgn" (UID: "8c20fd39-64f0-40d4-9c12-915763fddfde") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042169 5101 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042269 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume podName:4bcaae32-6fca-4120-8ca7-d9f5f709cb4c nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542259237 +0000 UTC m=+114.985889504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume") pod "dns-default-4lvr8" (UID: "4bcaae32-6fca-4120-8ca7-d9f5f709cb4c") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042360 5101 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042480 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert podName:e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542468783 +0000 UTC m=+114.986099050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert") pod "packageserver-7d4fc7d867-kxcn8" (UID: "e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042598 5101 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042715 5101 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042782 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs podName:901b7095-5e60-483e-996c-1d63888331ce nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542674168 +0000 UTC m=+114.986304455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs") pod "multus-admission-controller-69db94689b-z4tq2" (UID: "901b7095-5e60-483e-996c-1d63888331ce") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042946 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert podName:b36514f0-26f0-4728-ae25-65a5ba99d2fa nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542925975 +0000 UTC m=+114.986556242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert") pod "ingress-canary-4q7cw" (UID: "b36514f0-26f0-4728-ae25-65a5ba99d2fa") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.042972 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert podName:8c20fd39-64f0-40d4-9c12-915763fddfde nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.542963176 +0000 UTC m=+114.986593443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-lgbgn" (UID: "8c20fd39-64f0-40d4-9c12-915763fddfde") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.043307 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044169 5101 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044223 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls podName:a06e89cc-4b31-4452-95da-bcb17c66f029 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.544214591 +0000 UTC m=+114.987844858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls") pod "machine-config-controller-f9cdd68f7-jwllh" (UID: "a06e89cc-4b31-4452-95da-bcb17c66f029") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044244 5101 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044267 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs podName:027bf0e3-cc9b-4a15-85ca-75cdb81a7a63 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.544261253 +0000 UTC m=+114.987891520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs") pod "router-default-68cf44c8b8-jrw7k" (UID: "027bf0e3-cc9b-4a15-85ca-75cdb81a7a63") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044305 5101 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044331 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls podName:4a368ef1-f996-42c8-ae62-a06dcff3e625 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.544324004 +0000 UTC m=+114.987954271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-75ffdb6fcd-mwpd8" (UID: "4a368ef1-f996-42c8-ae62-a06dcff3e625") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044397 5101 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.044436 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key podName:d93b1df8-3fed-437c-a7ff-7fea2a61fcb0 nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.544412957 +0000 UTC m=+114.988043224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key") pod "service-ca-74545575db-gl7dl" (UID: "d93b1df8-3fed-437c-a7ff-7fea2a61fcb0") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.055693 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.055983 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.056266 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.057479 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sp4f\" (UniqueName: \"kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f\") pod \"controller-manager-65b6cccf98-64f6k\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.073764 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.081441 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.092942 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.097469 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.106986 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.112987 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.148520 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.149260 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.649232584 +0000 UTC m=+115.092862851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.149568 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.156612 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.190542 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.228765 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.230032 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.231476 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.255987 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.256991 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.756971163 +0000 UTC m=+115.200601430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.359124 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.359702 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.859685382 +0000 UTC m=+115.303315659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.411749 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.412032 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.414731 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.415175 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.415510 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.415617 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.416281 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.416383 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.416525 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.416697 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.416479 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.434609 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.479461 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.480356 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:02.980331482 +0000 UTC m=+115.423961749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.534371 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.534755 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.540850 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.545013 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.545369 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.558942 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.575571 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621697 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621737 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621766 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621844 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621869 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621903 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621923 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621961 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.621981 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622010 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622030 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622052 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622080 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622175 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622197 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622223 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622279 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622315 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622354 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.622372 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.623089 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.123069239 +0000 UTC m=+115.566699546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.623265 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-config-volume\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.624654 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c20fd39-64f0-40d4-9c12-915763fddfde-config\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.640991 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-metrics-certs\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.641830 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-cabundle\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.642303 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.643960 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.645660 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/901b7095-5e60-483e-996c-1d63888331ce-webhook-certs\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.649456 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-stats-auth\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.623269 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.650675 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-signing-key\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.651481 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.653294 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a06e89cc-4b31-4452-95da-bcb17c66f029-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.656355 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.657053 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c20fd39-64f0-40d4-9c12-915763fddfde-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.659765 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.671906 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-metrics-tls\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.674080 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b36514f0-26f0-4728-ae25-65a5ba99d2fa-cert\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.674501 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.682011 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-apiservice-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.684904 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-webhook-cert\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.685798 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a368ef1-f996-42c8-ae62-a06dcff3e625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.692583 5101 request.go:752] "Waited before sending request" delay="1.962803193s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.695847 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-default-certificate\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.696199 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.717637 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.724905 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.726275 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.2262423 +0000 UTC m=+115.669872567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.735679 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.756224 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.834324 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv\") pod \"oauth-openshift-66458b6674-7gkpq\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.835845 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmvt\" (UniqueName: \"kubernetes.io/projected/ba64c46a-5bbe-470e-8dcd-560c5f1ddf59-kube-api-access-cqmvt\") pod \"dns-operator-799b87ffcd-rgqgl\" (UID: \"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.839317 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.839863 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.339849064 +0000 UTC m=+115.783479331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.840649 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-node-bootstrap-token\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.853156 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd43162d-92a7-42ed-8615-ce99aaf16067-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-bxwd2\" (UID: \"bd43162d-92a7-42ed-8615-ce99aaf16067\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.854618 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4g5\" (UniqueName: \"kubernetes.io/projected/9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f-kube-api-access-9b4g5\") pod \"machine-approver-54c688565-mgr24\" (UID: \"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24"
Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.857412 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31d304d5-f99c-4384-87d1-5ffffb5d2694-certs\") pod 
\"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.886793 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz48z\" (UniqueName: \"kubernetes.io/projected/61c87129-51d7-446d-ac4a-d0f7c4e7a3f5-kube-api-access-zz48z\") pod \"cluster-samples-operator-6b564684c8-j478l\" (UID: \"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.914647 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vvq\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.921031 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.941442 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7k6z\" (UniqueName: \"kubernetes.io/projected/4484a02d-d1db-4408-806f-3116be160354-kube-api-access-z7k6z\") pod \"package-server-manager-77f986bd66-ldwwl\" (UID: \"4484a02d-d1db-4408-806f-3116be160354\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.941919 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:02 crc kubenswrapper[5101]: E0122 09:54:02.942652 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.442631384 +0000 UTC m=+115.886261651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.959859 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-64tqs\" (UniqueName: \"kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs\") pod \"cni-sysctl-allowlist-ds-l6rf4\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:02 crc kubenswrapper[5101]: I0122 09:54:02.978341 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8c7\" (UniqueName: \"kubernetes.io/projected/3b1c8d56-3eac-4ef1-9d84-786af0465c79-kube-api-access-wg8c7\") pod \"openshift-controller-manager-operator-686468bdd5-qm8xn\" (UID: \"3b1c8d56-3eac-4ef1-9d84-786af0465c79\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:02 
crc kubenswrapper[5101]: I0122 09:54:02.992899 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed3aad7f-c0d8-468f-838b-a3700c3e60b0-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-96sjm\" (UID: \"ed3aad7f-c0d8-468f-838b-a3700c3e60b0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.018330 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.022498 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwjj\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-kube-api-access-krwjj\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.031309 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.035263 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.035516 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.036241 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kg2g\" (UniqueName: \"kubernetes.io/projected/31d304d5-f99c-4384-87d1-5ffffb5d2694-kube-api-access-8kg2g\") pod \"machine-config-server-xzmmw\" (UID: \"31d304d5-f99c-4384-87d1-5ffffb5d2694\") " pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.050179 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.050580 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.050675 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.550655242 +0000 UTC m=+115.994285509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.062725 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h789\" (UniqueName: \"kubernetes.io/projected/e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e-kube-api-access-9h789\") pod \"packageserver-7d4fc7d867-kxcn8\" (UID: \"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.071517 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.084156 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qrzq\" (UniqueName: \"kubernetes.io/projected/98b4c472-57be-436c-a925-427f5bc72fca-kube-api-access-7qrzq\") pod \"machine-config-operator-67c9d58cbb-h6k4m\" (UID: \"98b4c472-57be-436c-a925-427f5bc72fca\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.084484 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.087810 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.105485 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xzmmw" Jan 22 09:54:03 crc kubenswrapper[5101]: W0122 09:54:03.107296 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9fa7a6_9771_4006_a4fb_2ab86f9dd802.slice/crio-213f8c91733189d3c3e3c402ab895c50f5a271440d228de2e9e4359cb5b3522c WatchSource:0}: Error finding container 213f8c91733189d3c3e3c402ab895c50f5a271440d228de2e9e4359cb5b3522c: Status 404 returned error can't find the container with id 213f8c91733189d3c3e3c402ab895c50f5a271440d228de2e9e4359cb5b3522c Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.126936 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqrh\" (UniqueName: \"kubernetes.io/projected/78ef6db0-8118-4e91-8d42-f4d7d1f82d32-kube-api-access-blqrh\") pod \"service-ca-operator-5b9c976747-4lbz9\" (UID: \"78ef6db0-8118-4e91-8d42-f4d7d1f82d32\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.152397 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.152968 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:03.652947119 +0000 UTC m=+116.096577386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.162318 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.163657 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.187150 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.189677 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhvbt\" (UniqueName: \"kubernetes.io/projected/d93b1df8-3fed-437c-a7ff-7fea2a61fcb0-kube-api-access-vhvbt\") pod \"service-ca-74545575db-gl7dl\" (UID: \"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0\") " pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.198066 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxt4\" (UniqueName: \"kubernetes.io/projected/8c20fd39-64f0-40d4-9c12-915763fddfde-kube-api-access-8nxt4\") pod \"kube-storage-version-migrator-operator-565b79b866-lgbgn\" (UID: \"8c20fd39-64f0-40d4-9c12-915763fddfde\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.198219 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhg6\" (UniqueName: \"kubernetes.io/projected/4a368ef1-f996-42c8-ae62-a06dcff3e625-kube-api-access-trhg6\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mwpd8\" (UID: \"4a368ef1-f996-42c8-ae62-a06dcff3e625\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.199048 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tql\" (UniqueName: \"kubernetes.io/projected/4bcaae32-6fca-4120-8ca7-d9f5f709cb4c-kube-api-access-52tql\") pod \"dns-default-4lvr8\" (UID: \"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c\") " pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.199067 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7bw5\" (UniqueName: \"kubernetes.io/projected/d07fefdf-c0b8-488e-94ec-b54954cfacce-kube-api-access-h7bw5\") pod \"catalog-operator-75ff9f647d-kx9c8\" (UID: \"d07fefdf-c0b8-488e-94ec-b54954cfacce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.217202 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqzb\" (UniqueName: \"kubernetes.io/projected/932ff910-1ca7-4354-a306-1ce5f15f4f92-kube-api-access-jkqzb\") pod \"csi-hostpathplugin-gf9jd\" (UID: \"932ff910-1ca7-4354-a306-1ce5f15f4f92\") " pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:03 crc kubenswrapper[5101]: W0122 09:54:03.222378 5101 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31d304d5_f99c_4384_87d1_5ffffb5d2694.slice/crio-95408c2746ee8cc886600b78006bcad4212e51af264a9ca870ebeb331e1fd758 WatchSource:0}: Error finding container 95408c2746ee8cc886600b78006bcad4212e51af264a9ca870ebeb331e1fd758: Status 404 returned error can't find the container with id 95408c2746ee8cc886600b78006bcad4212e51af264a9ca870ebeb331e1fd758 Jan 22 09:54:03 crc kubenswrapper[5101]: W0122 09:54:03.223840 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a20a61_7a61_4f52_b57c_c289c661f268.slice/crio-2c10f229a080575068b84f9499107fbe4640f70e3123011ec6fd27b45ce47090 WatchSource:0}: Error finding container 2c10f229a080575068b84f9499107fbe4640f70e3123011ec6fd27b45ce47090: Status 404 returned error can't find the container with id 2c10f229a080575068b84f9499107fbe4640f70e3123011ec6fd27b45ce47090 Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.225098 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.231807 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp2l2\" (UniqueName: \"kubernetes.io/projected/a06e89cc-4b31-4452-95da-bcb17c66f029-kube-api-access-bp2l2\") pod \"machine-config-controller-f9cdd68f7-jwllh\" (UID: \"a06e89cc-4b31-4452-95da-bcb17c66f029\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.276013 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.276173 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.276844 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.277829 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.278384 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.778365852 +0000 UTC m=+116.221996129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.285534 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4lw\" (UniqueName: \"kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw\") pod \"collect-profiles-29484585-945sr\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.289299 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gl7dl" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.300126 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8lht\" (UniqueName: \"kubernetes.io/projected/0c550c98-0e20-4316-8338-5268b336f2a2-kube-api-access-z8lht\") pod \"migrator-866fcbc849-45b99\" (UID: \"0c550c98-0e20-4316-8338-5268b336f2a2\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.304550 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.304810 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.305983 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lpd\" (UniqueName: \"kubernetes.io/projected/1bf878a0-4591-4ee2-96e9-db36fe28422d-kube-api-access-t6lpd\") pod \"console-64d44f6ddf-hwdqt\" (UID: \"1bf878a0-4591-4ee2-96e9-db36fe28422d\") " pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.312925 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddbd0830-d2a2-4f8a-84b4-74041a59ee10-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8frxr\" (UID: \"ddbd0830-d2a2-4f8a-84b4-74041a59ee10\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.315964 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.338029 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.341439 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8hrzs"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.341943 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svnz\" (UniqueName: \"kubernetes.io/projected/027bf0e3-cc9b-4a15-85ca-75cdb81a7a63-kube-api-access-2svnz\") pod \"router-default-68cf44c8b8-jrw7k\" (UID: \"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63\") " pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.347853 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.349900 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.357023 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.362058 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.367113 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6pl\" (UniqueName: \"kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl\") pod \"marketplace-operator-547dbd544d-ss5t9\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.375613 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.375672 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04938683-0667-47f5-8b0f-69dfb43c4c3a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-8z65m\" (UID: \"04938683-0667-47f5-8b0f-69dfb43c4c3a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.379155 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.379351 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.879317312 +0000 UTC m=+116.322947589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.380091 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.380557 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.880546266 +0000 UTC m=+116.324176533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.389610 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsj6\" (UniqueName: \"kubernetes.io/projected/901b7095-5e60-483e-996c-1d63888331ce-kube-api-access-vtsj6\") pod \"multus-admission-controller-69db94689b-z4tq2\" (UID: \"901b7095-5e60-483e-996c-1d63888331ce\") " pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.431348 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" event={"ID":"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802","Type":"ContainerStarted","Data":"213f8c91733189d3c3e3c402ab895c50f5a271440d228de2e9e4359cb5b3522c"}
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.432531 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" event={"ID":"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f","Type":"ContainerStarted","Data":"a849c9335a9305f15ca6376ead12341ad4839c43445029f85f6e6b46d00117f2"}
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.433532 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" event={"ID":"a6a20a61-7a61-4f52-b57c-c289c661f268","Type":"ContainerStarted","Data":"2c10f229a080575068b84f9499107fbe4640f70e3123011ec6fd27b45ce47090"}
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.433574 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrjw\" (UniqueName: \"kubernetes.io/projected/1d1245dc-9786-483a-a1b9-b187dafc3ab4-kube-api-access-qtrjw\") pod \"etcd-operator-69b85846b6-bc6vs\" (UID: \"1d1245dc-9786-483a-a1b9-b187dafc3ab4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.435571 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xzmmw" event={"ID":"31d304d5-f99c-4384-87d1-5ffffb5d2694","Type":"ContainerStarted","Data":"95408c2746ee8cc886600b78006bcad4212e51af264a9ca870ebeb331e1fd758"}
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.439227 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zl8\" (UniqueName: \"kubernetes.io/projected/660347db-42cb-4f31-801d-97c3c3523f66-kube-api-access-89zl8\") pod \"olm-operator-5cdf44d969-79dz2\" (UID: \"660347db-42cb-4f31-801d-97c3c3523f66\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.450761 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hkv\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-kube-api-access-w4hkv\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.453390 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.469594 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.473627 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b24dae6c-3ca8-4404-8587-69276f17daf6-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-k7lkq\" (UID: \"b24dae6c-3ca8-4404-8587-69276f17daf6\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.479940 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.480395 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.481197 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.481699 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:03.981675571 +0000 UTC m=+116.425305838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.490575 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.496202 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8996\" (UniqueName: \"kubernetes.io/projected/bd3171cb-920d-48bd-9653-6cd577a560bd-kube-api-access-k8996\") pod \"openshift-config-operator-5777786469-x59wv\" (UID: \"bd3171cb-920d-48bd-9653-6cd577a560bd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.510534 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rq87\" (UniqueName: \"kubernetes.io/projected/b36514f0-26f0-4728-ae25-65a5ba99d2fa-kube-api-access-6rq87\") pod \"ingress-canary-4q7cw\" (UID: \"b36514f0-26f0-4728-ae25-65a5ba99d2fa\") " pod="openshift-ingress-canary/ingress-canary-4q7cw"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.519714 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.536837 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.537304 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.561398 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqgl"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.566743 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.578365 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-w2759"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.580956 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.583037 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.583594 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.083569577 +0000 UTC m=+116.527199884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.587583 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-w5j22"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.592924 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-66wpn"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.626020 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.642266 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.684315 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.684533 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.184494276 +0000 UTC m=+116.628124563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.684971 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.685389 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.185377541 +0000 UTC m=+116.629007818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.695446 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4q7cw"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.734606 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.745083 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.786614 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.786817 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.286774363 +0000 UTC m=+116.730404630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.787129 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.787498 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.287482343 +0000 UTC m=+116.731112610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.798672 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.812645 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.817107 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.819610 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2"]
Jan 22 09:54:03 crc kubenswrapper[5101]: I0122 09:54:03.888950 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:03 crc kubenswrapper[5101]: E0122 09:54:03.890182 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.390156921 +0000 UTC m=+116.833787188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:03.992281 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:03.992898 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.49288514 +0000 UTC m=+116.936515407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.093196 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.093649 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.593630414 +0000 UTC m=+117.037260681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.135825 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9"]
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.195841 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.196232 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn"]
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.196328 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.696303382 +0000 UTC m=+117.139933699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.263667 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs"]
Jan 22 09:54:04 crc kubenswrapper[5101]: W0122 09:54:04.289162 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ef6db0_8118_4e91_8d42_f4d7d1f82d32.slice/crio-b4daed10929c50e777bfbe06457d4a63af7ab05505fba93faba18b4950bc322f WatchSource:0}: Error finding container b4daed10929c50e777bfbe06457d4a63af7ab05505fba93faba18b4950bc322f: Status 404 returned error can't find the container with id b4daed10929c50e777bfbe06457d4a63af7ab05505fba93faba18b4950bc322f
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.298221 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.298646 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.79862395 +0000 UTC m=+117.242254217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.377922 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m"]
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.400080 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.400571 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:04.900551267 +0000 UTC m=+117.344181534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.481544 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" event={"ID":"16e791e1-266c-46d9-a6cb-d6c7e48d4df9","Type":"ContainerStarted","Data":"33d509e94ecb99fe74dff1726f53fd0a7bef2a9631981ac2c11ca991e435f8f7"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.482644 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" event={"ID":"78ef6db0-8118-4e91-8d42-f4d7d1f82d32","Type":"ContainerStarted","Data":"b4daed10929c50e777bfbe06457d4a63af7ab05505fba93faba18b4950bc322f"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.488584 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" event={"ID":"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59","Type":"ContainerStarted","Data":"59ad453f0212ab81e5f25422cdb547deb030ba3894dd78ad991ce0b9fc59e9f0"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.492934 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" event={"ID":"bd43162d-92a7-42ed-8615-ce99aaf16067","Type":"ContainerStarted","Data":"7bb9d1d01ae8bb7842e3fc82dfc93750817f53c58f8feec3fc83df0f5d058adf"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.509498 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.510272 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.010250642 +0000 UTC m=+117.453880909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.558233 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" event={"ID":"1aa3720b-6520-49ef-96d2-bf634f1a5f8c","Type":"ContainerStarted","Data":"80e4e3c2a130aa0f6001ce39486c9405f35496273009427ec6cf24811173cc02"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.558290 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" event={"ID":"223b7c4c-942e-44bd-bf88-67db1adfed29","Type":"ContainerStarted","Data":"94fe1018f969d5c7d75d5f3006418290b9747c06adc97e91a64df285452126e8"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.563673 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" event={"ID":"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5","Type":"ContainerStarted","Data":"d68857f04aac008742ee593ab4b4ff0e280c4007073feb19526ac7cbae06346e"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.605521 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" event={"ID":"4484a02d-d1db-4408-806f-3116be160354","Type":"ContainerStarted","Data":"f79cd181b2f16a426fbcc35e79a8aa7b91b077e8a9fa78f4d3e81413aabf377e"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.607665 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" event={"ID":"79b5eb1b-bf45-47ce-992d-4c1bae056fc5","Type":"ContainerStarted","Data":"6ef1efa715dc8b32bc24c58e75946f5eefa926b58a5d8e43e97f4bbd23239b87"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.609121 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" event={"ID":"112f7c63-b876-4377-8418-18d8abc92100","Type":"ContainerStarted","Data":"39682ac7a55afdb4f21f89b18e89d7bd2d2c9099a7e62d6e64be8b91ea9e93b2"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.610985 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.611022 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4lvr8"]
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.611060 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" event={"ID":"1c81934b-984b-4537-b93e-ecec345fdf73","Type":"ContainerStarted","Data":"903dfe02bd9da97b56a10aeb120c9d347689ba5b853b5022109d2d3e978b6892"}
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.611519 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.11150177 +0000 UTC m=+117.555132037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.614521 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" event={"ID":"3b1c8d56-3eac-4ef1-9d84-786af0465c79","Type":"ContainerStarted","Data":"8f8a0c69334251de3db40b31c3a4aadc4ea758bbb99b1742bf4d06aeb65ee9b8"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.617911 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-w2759" event={"ID":"ada11655-156b-4b1e-ad19-8391c89c8e6b","Type":"ContainerStarted","Data":"bdd9432d61717a5de1c5d346cbd83ba42a3bf3ccfc4720c43ed74497c9601285"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.620170 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" event={"ID":"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb","Type":"ContainerStarted","Data":"34d9b45318a79048dc3074903e235097e7d8189ab8c639e369da7ce8d554b9f3"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.627675 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" event={"ID":"8c20fd39-64f0-40d4-9c12-915763fddfde","Type":"ContainerStarted","Data":"02f9b89a22e584beb39af5ddc5fee50dab542a1c45ec381a77d966791226708d"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.636677 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" event={"ID":"40ddbf39-c363-4a9d-90d2-911b700eb8d1","Type":"ContainerStarted","Data":"f751687f75a1507ec32272c98ab1cf876437100c4d8375cfdebcdf0611e812c6"}
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.715690 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.717373 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.217339916 +0000 UTC m=+117.660970243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.850661 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.851000 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.350986879 +0000 UTC m=+117.794617146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:04 crc kubenswrapper[5101]: I0122 09:54:04.956125 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:04 crc kubenswrapper[5101]: E0122 09:54:04.956676 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.45665475 +0000 UTC m=+117.900285017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.059175 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.059592 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.559577945 +0000 UTC m=+118.003208212 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.163031 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.170119 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.670078512 +0000 UTC m=+118.113708779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.271688 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.272149 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.772132672 +0000 UTC m=+118.215762939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.373382 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.373762 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.87372593 +0000 UTC m=+118.317356197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.374186 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.374578 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.874561023 +0000 UTC m=+118.318191290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.478050 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.478553 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:05.978531908 +0000 UTC m=+118.422162175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.676411 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.677364 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.177333901 +0000 UTC m=+118.620964168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.782068 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.282034284 +0000 UTC m=+118.725664551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.783881 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.784450 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.784966 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.284948356 +0000 UTC m=+118.728578623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.835148 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" event={"ID":"8e9fa7a6-9771-4006-a4fb-2ab86f9dd802","Type":"ContainerStarted","Data":"5e821e01907c4b9dc5e28edf62fee320059187595381d9eaf94066286e96ccda"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.841902 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4lvr8" event={"ID":"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c","Type":"ContainerStarted","Data":"87fdfe8f9143d273eb88297a74f9db1ff3951ee99246cb529da736cb1b030ee0"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.846308 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.846679 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xzmmw" 
event={"ID":"31d304d5-f99c-4384-87d1-5ffffb5d2694","Type":"ContainerStarted","Data":"4043030c4fb5506d83b4d29a51b3af35d18e5166810fb0506fad633275f7d927"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.849672 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.850913 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" event={"ID":"98b4c472-57be-436c-a925-427f5bc72fca","Type":"ContainerStarted","Data":"0080a96d7fc796329e90e49ec2e2a28e1e62d93c6a779473688903a4d88c68a9"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.854380 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-x59wv"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.854749 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-s2rsq" podStartSLOduration=94.854728385 podStartE2EDuration="1m34.854728385s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:05.853802789 +0000 UTC m=+118.297433076" watchObservedRunningTime="2026-01-22 09:54:05.854728385 +0000 UTC m=+118.298358652" Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.857941 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" event={"ID":"1d1245dc-9786-483a-a1b9-b187dafc3ab4","Type":"ContainerStarted","Data":"c1d5969c02f6c26aaabd8a364a645bd3089c85f686852af30328495a49c624f3"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.872923 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" event={"ID":"40ddbf39-c363-4a9d-90d2-911b700eb8d1","Type":"ContainerStarted","Data":"cd4ac467054f5fcbfcaabe10c2da739846f1265a4cf43bea7783e7a7aa25ddef"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.881862 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xzmmw" podStartSLOduration=5.881844932 podStartE2EDuration="5.881844932s" podCreationTimestamp="2026-01-22 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:05.880491904 +0000 UTC m=+118.324122191" watchObservedRunningTime="2026-01-22 09:54:05.881844932 +0000 UTC m=+118.325475189" Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.882675 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" event={"ID":"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63","Type":"ContainerStarted","Data":"4ade5dd7a20e56e0739a25f0253fe9aa2c58d73fbd2ae8f0f1d4b55453fdd54d"} Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.889483 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.890877 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.390856214 +0000 UTC m=+118.834486481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:05 crc kubenswrapper[5101]: W0122 09:54:05.956773 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb24dae6c_3ca8_4404_8587_69276f17daf6.slice/crio-87920dd947ea7eb45ee7663fb32f628dc53d6f2b1a36310efc418a30a1879806 WatchSource:0}: Error finding container 87920dd947ea7eb45ee7663fb32f628dc53d6f2b1a36310efc418a30a1879806: Status 404 returned error can't find the container with id 87920dd947ea7eb45ee7663fb32f628dc53d6f2b1a36310efc418a30a1879806 Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.961777 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.979265 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.987585 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr"] Jan 22 09:54:05 crc kubenswrapper[5101]: I0122 09:54:05.992449 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " 
pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:05 crc kubenswrapper[5101]: E0122 09:54:05.992887 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.492870733 +0000 UTC m=+118.936501000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.000228 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gl7dl"] Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.032377 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh"] Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.037125 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm"] Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.095209 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.096792 5101 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.595662994 +0000 UTC m=+119.039293271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.097267 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.098344 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.598332899 +0000 UTC m=+119.041963166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.157747 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3aad7f_c0d8_468f_838b_a3700c3e60b0.slice/crio-39edd357aa32e49a615e88e0eddb6e06298cf3f601c672e5cdcbe6b06f5d2316 WatchSource:0}: Error finding container 39edd357aa32e49a615e88e0eddb6e06298cf3f601c672e5cdcbe6b06f5d2316: Status 404 returned error can't find the container with id 39edd357aa32e49a615e88e0eddb6e06298cf3f601c672e5cdcbe6b06f5d2316 Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.165945 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568dbcc8_3ad6_4b41_acb0_8e4c28973db7.slice/crio-653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41 WatchSource:0}: Error finding container 653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41: Status 404 returned error can't find the container with id 653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41 Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.199300 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.199517 5101 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.699492655 +0000 UTC m=+119.143122922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.199945 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.200756 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.700732109 +0000 UTC m=+119.144362406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.215502 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8"] Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.217126 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr"] Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.300897 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.301441 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.801398411 +0000 UTC m=+119.245028688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.319652    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-hwdqt"]
Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.353102    5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddbd0830_d2a2_4f8a_84b4_74041a59ee10.slice/crio-88325df7d473478d5ff55c3f6930115a3a80b2ea840ca14279b55cd35500ae35 WatchSource:0}: Error finding container 88325df7d473478d5ff55c3f6930115a3a80b2ea840ca14279b55cd35500ae35: Status 404 returned error can't find the container with id 88325df7d473478d5ff55c3f6930115a3a80b2ea840ca14279b55cd35500ae35
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.355121    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4q7cw"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.403704    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.404123    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:06.90410873 +0000 UTC m=+119.347738997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.501562    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.505353    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.505834    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.005810681 +0000 UTC m=+119.449440948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.524042    5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36514f0_26f0_4728_ae25_65a5ba99d2fa.slice/crio-5ebd12ab2b70870c6ec3a15fdaeedec446145cf76d5c3e0d46ef3d72469966cf WatchSource:0}: Error finding container 5ebd12ab2b70870c6ec3a15fdaeedec446145cf76d5c3e0d46ef3d72469966cf: Status 404 returned error can't find the container with id 5ebd12ab2b70870c6ec3a15fdaeedec446145cf76d5c3e0d46ef3d72469966cf
Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.526157    5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c550c98_0e20_4316_8338_5268b336f2a2.slice/crio-9d1687c21b999d4292b8240337a3b567f58ed4e67a2ba88df178460a76f34263 WatchSource:0}: Error finding container 9d1687c21b999d4292b8240337a3b567f58ed4e67a2ba88df178460a76f34263: Status 404 returned error can't find the container with id 9d1687c21b999d4292b8240337a3b567f58ed4e67a2ba88df178460a76f34263
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.540501    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.569527    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.572520    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z4tq2"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.575569    5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gf9jd"]
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.607019    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.607356    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.107341597 +0000 UTC m=+119.550971864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.708562    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.709339    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.209318955 +0000 UTC m=+119.652949222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: W0122 09:54:06.734104    5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a368ef1_f996_42c8_ae62_a06dcff3e625.slice/crio-db52ff5d5e0521b8b579c110ad4146e3462704e40cb9221a3e3b104d6efc9a7c WatchSource:0}: Error finding container db52ff5d5e0521b8b579c110ad4146e3462704e40cb9221a3e3b104d6efc9a7c: Status 404 returned error can't find the container with id db52ff5d5e0521b8b579c110ad4146e3462704e40cb9221a3e3b104d6efc9a7c
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.811842    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.812264    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.31224941 +0000 UTC m=+119.755879677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.921284    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:06 crc kubenswrapper[5101]: E0122 09:54:06.922207    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.422181051 +0000 UTC m=+119.865811318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.956477    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" event={"ID":"901b7095-5e60-483e-996c-1d63888331ce","Type":"ContainerStarted","Data":"ab134b728a7d4531b5777b920d6634a0906c470926b7f89569b11ec138094f98"}
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.958766    5101 generic.go:358] "Generic (PLEG): container finished" podID="112f7c63-b876-4377-8418-18d8abc92100" containerID="b0f4cd2bc4a1e90f8f59121f38cedecb3d1dd5955f5e87533c55f2eb3f64fa45" exitCode=0
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.958816    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" event={"ID":"112f7c63-b876-4377-8418-18d8abc92100","Type":"ContainerDied","Data":"b0f4cd2bc4a1e90f8f59121f38cedecb3d1dd5955f5e87533c55f2eb3f64fa45"}
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.967922    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" event={"ID":"98b4c472-57be-436c-a925-427f5bc72fca","Type":"ContainerStarted","Data":"4969380e855116615b6b7ca4255fcd713fb49cca4ece48d59e79bddecc987311"}
Jan 22 09:54:06 crc kubenswrapper[5101]: I0122 09:54:06.981135    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" event={"ID":"b24dae6c-3ca8-4404-8587-69276f17daf6","Type":"ContainerStarted","Data":"87920dd947ea7eb45ee7663fb32f628dc53d6f2b1a36310efc418a30a1879806"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.034345    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" event={"ID":"3b1c8d56-3eac-4ef1-9d84-786af0465c79","Type":"ContainerStarted","Data":"3f8603ef9eab3c2889eb34dac799430c32bddd6616710255f1991822c535f97f"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.047896    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" event={"ID":"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb","Type":"ContainerStarted","Data":"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.048549    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.050823    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" event={"ID":"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e","Type":"ContainerStarted","Data":"de37ce6d673a8ee392665104a230c2919103bd05a3b6c0044ad4d429dc5900ec"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.053446    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.055364    5101 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-flq7f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.055506    5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.060327    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.560285828 +0000 UTC m=+120.003916095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.082084    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qm8xn" podStartSLOduration=96.082055737 podStartE2EDuration="1m36.082055737s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.078760345 +0000 UTC m=+119.522390602" watchObservedRunningTime="2026-01-22 09:54:07.082055737 +0000 UTC m=+119.525686004"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.106798    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" event={"ID":"bd3171cb-920d-48bd-9653-6cd577a560bd","Type":"ContainerStarted","Data":"6591eb8fd09c59f5e53c9f6e40d1ad6e3fb7e7bf39e1ffa4bccfe39de85bc2a4"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.120225    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" podStartSLOduration=95.120199022 podStartE2EDuration="1m35.120199022s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.118872855 +0000 UTC m=+119.562503122" watchObservedRunningTime="2026-01-22 09:54:07.120199022 +0000 UTC m=+119.563829289"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.157035    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" podStartSLOduration=96.15701315 podStartE2EDuration="1m36.15701315s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.156475925 +0000 UTC m=+119.600106212" watchObservedRunningTime="2026-01-22 09:54:07.15701315 +0000 UTC m=+119.600643417"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.158278    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.159628    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.659592842 +0000 UTC m=+120.103223119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.172034    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.173086    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.673068259 +0000 UTC m=+120.116698526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.199233    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" event={"ID":"932ff910-1ca7-4354-a306-1ce5f15f4f92","Type":"ContainerStarted","Data":"949f6ddd12c0d5b6556e2e3903a2549b3377e310cb280f1da39fa58a4eb31a2e"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.254528    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gl7dl" event={"ID":"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0","Type":"ContainerStarted","Data":"c65b12b189d74cfee8697ca445d7ed9ebd10dd018889fc656fa0f6bda6b850dc"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.273902    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.274252    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.774233375 +0000 UTC m=+120.217863642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.286456    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" event={"ID":"04938683-0667-47f5-8b0f-69dfb43c4c3a","Type":"ContainerStarted","Data":"4d7fc4a28e3ff93b3da28f302453bef3042e15f11524502bc5e2e8909f0ae772"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.299073    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" event={"ID":"16e791e1-266c-46d9-a6cb-d6c7e48d4df9","Type":"ContainerStarted","Data":"79480be7d0af0ff9be6d8ea5c0ccfc0f84e19f378235c9269038a674b3002cbe"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.301067    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.302115    5101 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-7gkpq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body=
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.302182    5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.302837    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" event={"ID":"660347db-42cb-4f31-801d-97c3c3523f66","Type":"ContainerStarted","Data":"ce4d3e5813d5aee203635b713e270e43ce745851332eaf2e0938bb968d45d077"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.311459    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" event={"ID":"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59","Type":"ContainerStarted","Data":"b067dba19ef4ee73e9320daac806cc276902bcab107a8053ac46b12e0520ff90"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.325292    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" podStartSLOduration=96.32526986 podStartE2EDuration="1m36.32526986s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.323242413 +0000 UTC m=+119.766872690" watchObservedRunningTime="2026-01-22 09:54:07.32526986 +0000 UTC m=+119.768900127"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.332139    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" event={"ID":"bd43162d-92a7-42ed-8615-ce99aaf16067","Type":"ContainerStarted","Data":"5d6621c3ee9f91c56a336852993bc1f6e112cd06df40012970b61532b6fbf874"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.336291    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" event={"ID":"1aa3720b-6520-49ef-96d2-bf634f1a5f8c","Type":"ContainerStarted","Data":"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.337245    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.339039    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" event={"ID":"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5","Type":"ContainerStarted","Data":"0e13bda2fe83d932226094155912792215a14b223f29150f828391cd9b7dd2c2"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.340396    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" event={"ID":"ed3aad7f-c0d8-468f-838b-a3700c3e60b0","Type":"ContainerStarted","Data":"39edd357aa32e49a615e88e0eddb6e06298cf3f601c672e5cdcbe6b06f5d2316"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.347758    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" event={"ID":"a06e89cc-4b31-4452-95da-bcb17c66f029","Type":"ContainerStarted","Data":"19e54eb4ccb4238a9163ff6206593562efdbf73d532c3d276050639c28edb3c4"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.349917    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-bxwd2" podStartSLOduration=96.349860217 podStartE2EDuration="1m36.349860217s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.346762751 +0000 UTC m=+119.790393018" watchObservedRunningTime="2026-01-22 09:54:07.349860217 +0000 UTC m=+119.793490514"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.367755    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" event={"ID":"43dfdef8-e150-4eba-b790-6c9a395fba76","Type":"ContainerStarted","Data":"908efd83b1d222e32b0b2de371f9ce287cd0d4bc529bf541b286886b5db91c19"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.370614    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" event={"ID":"79b5eb1b-bf45-47ce-992d-4c1bae056fc5","Type":"ContainerStarted","Data":"546122c54da987ae1766007bff19137c1acf0ae8e1595137c2eae4f54b334c24"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.375473    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-8hrzs"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.376497    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.378321    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.878298211 +0000 UTC m=+120.321928538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.393000    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" event={"ID":"0c550c98-0e20-4316-8338-5268b336f2a2","Type":"ContainerStarted","Data":"9d1687c21b999d4292b8240337a3b567f58ed4e67a2ba88df178460a76f34263"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.396553    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" event={"ID":"1c81934b-984b-4537-b93e-ecec345fdf73","Type":"ContainerStarted","Data":"ef95527cfc8ad8fb7498c1647f3d6f047e706b9cdae7b7fbb3f7b65d4cf5dd5f"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.400039    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4q7cw" event={"ID":"b36514f0-26f0-4728-ae25-65a5ba99d2fa","Type":"ContainerStarted","Data":"5ebd12ab2b70870c6ec3a15fdaeedec446145cf76d5c3e0d46ef3d72469966cf"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.402511    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" event={"ID":"4a368ef1-f996-42c8-ae62-a06dcff3e625","Type":"ContainerStarted","Data":"db52ff5d5e0521b8b579c110ad4146e3462704e40cb9221a3e3b104d6efc9a7c"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.410110    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" event={"ID":"d07fefdf-c0b8-488e-94ec-b54954cfacce","Type":"ContainerStarted","Data":"31773ff75f080078f47a29b0cd0346d380e77b997a1408f770ff21bb98c6ba1e"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.413057    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-w2759" event={"ID":"ada11655-156b-4b1e-ad19-8391c89c8e6b","Type":"ContainerStarted","Data":"aa346317e28affddb8798534b89e6cd17c995d4e4cca297ed1d891a3d6fe52f7"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.414439    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.416251    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-hwdqt" event={"ID":"1bf878a0-4591-4ee2-96e9-db36fe28422d","Type":"ContainerStarted","Data":"e89bf8778fdc40e1500ea64128fcea7a83e2e9a80eba3cd15f28ac96644573d0"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.422786    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" podStartSLOduration=96.422760843 podStartE2EDuration="1m36.422760843s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.400702307 +0000 UTC m=+119.844332574" watchObservedRunningTime="2026-01-22 09:54:07.422760843 +0000 UTC m=+119.866391110"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.430848    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" podStartSLOduration=96.430824178 podStartE2EDuration="1m36.430824178s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.374997519 +0000 UTC m=+119.818627786" watchObservedRunningTime="2026-01-22 09:54:07.430824178 +0000 UTC m=+119.874454445"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.424751    5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.433866    5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.434414    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.454931    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" event={"ID":"027bf0e3-cc9b-4a15-85ca-75cdb81a7a63","Type":"ContainerStarted","Data":"7066c85d66bd9c0d42d9beb269901cfca9559d5922ec194a751eb4ec34526ad2"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.479629    5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.481497    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:07.981409661 +0000 UTC m=+120.425039928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.484946    5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-w2759" podStartSLOduration=96.48492111 podStartE2EDuration="1m36.48492111s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.472297467 +0000 UTC m=+119.915927744" watchObservedRunningTime="2026-01-22 09:54:07.48492111 +0000 UTC m=+119.928551387"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.524309    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" event={"ID":"78ef6db0-8118-4e91-8d42-f4d7d1f82d32","Type":"ContainerStarted","Data":"2dd3b82f9acfa33f78632f03bf19363696470d85cfa0c8570ef82f7e48fa753a"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.583551    5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.585152    5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.085134229 +0000 UTC m=+120.528764546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.619870    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" event={"ID":"223b7c4c-942e-44bd-bf88-67db1adfed29","Type":"ContainerStarted","Data":"8cde93f8d70ccf2ed54ff49c49a58fd1ec9c64c3c4ef360825a5bb7b635201e9"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.621117    5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k"
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.627618    5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" event={"ID":"ddbd0830-d2a2-4f8a-84b4-74041a59ee10","Type":"ContainerStarted","Data":"88325df7d473478d5ff55c3f6930115a3a80b2ea840ca14279b55cd35500ae35"}
Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.627692    5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.633992 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" event={"ID":"568dbcc8-3ad6-4b41-acb0-8e4c28973db7","Type":"ContainerStarted","Data":"653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41"} Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.634106 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.634774 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podStartSLOduration=7.634749435 podStartE2EDuration="7.634749435s" podCreationTimestamp="2026-01-22 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.50605577 +0000 UTC m=+119.949686037" watchObservedRunningTime="2026-01-22 09:54:07.634749435 +0000 UTC m=+120.078379692" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.635051 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podStartSLOduration=96.635043333 podStartE2EDuration="1m36.635043333s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.632138512 +0000 UTC m=+120.075768779" watchObservedRunningTime="2026-01-22 09:54:07.635043333 +0000 UTC m=+120.078673600" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.655087 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" 
event={"ID":"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f","Type":"ContainerStarted","Data":"51ebb5820ae879416ef51ef018295ebd065a63e54f891c77332a80228e1a4ad8"} Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.685830 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4lbz9" podStartSLOduration=95.685346638 podStartE2EDuration="1m35.685346638s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.654530007 +0000 UTC m=+120.098160284" watchObservedRunningTime="2026-01-22 09:54:07.685346638 +0000 UTC m=+120.128976905" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.696063 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.697856 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.197826197 +0000 UTC m=+120.641456464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.712850 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-k4jfz" podStartSLOduration=96.712832666 podStartE2EDuration="1m36.712832666s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:07.710439169 +0000 UTC m=+120.154069426" watchObservedRunningTime="2026-01-22 09:54:07.712832666 +0000 UTC m=+120.156462933" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.803302 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:07 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:07 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:07 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.803386 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.804377 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.804707 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.304693971 +0000 UTC m=+120.748324238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.908265 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.908947 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.408906892 +0000 UTC m=+120.852537169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:07 crc kubenswrapper[5101]: I0122 09:54:07.909505 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:07 crc kubenswrapper[5101]: E0122 09:54:07.909943 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.409929961 +0000 UTC m=+120.853560218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.023864 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.024411 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.524385858 +0000 UTC m=+120.968016125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.125322 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.126215 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.626200742 +0000 UTC m=+121.069831009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.227138 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.227747 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.727725278 +0000 UTC m=+121.171355555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.329566 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.330756 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:08.830728505 +0000 UTC m=+121.274358772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.377670 5101 patch_prober.go:28] interesting pod/console-operator-67c89758df-8hrzs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.377741 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" podUID="79b5eb1b-bf45-47ce-992d-4c1bae056fc5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.433075 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.445085 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:08.945050698 +0000 UTC m=+121.388680965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.554345 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.555767 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.05574271 +0000 UTC m=+121.499372977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.658261 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.658971 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.158945853 +0000 UTC m=+121.602576120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.679568 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:08 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:08 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:08 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.679712 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.771287 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.793923 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:09.293892932 +0000 UTC m=+121.737523199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.871924 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.872368 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.372342213 +0000 UTC m=+121.815972480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.881246 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" event={"ID":"568dbcc8-3ad6-4b41-acb0-8e4c28973db7","Type":"ContainerStarted","Data":"822d05fe23aa296548da09bf42a1462cbc17dac5804bf931f7140419e6ab7fa4"} Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.881293 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l6rf4"] Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.946969 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" event={"ID":"4484a02d-d1db-4408-806f-3116be160354","Type":"ContainerStarted","Data":"f33ced505d6807c234a95da4c018dbfddb76b8d4fc73106008bf7e937267d3a0"} Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.947044 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" event={"ID":"4484a02d-d1db-4408-806f-3116be160354","Type":"ContainerStarted","Data":"b95a36b754d1c9005b245fefb13ce00b65dcd19fdcf3d3bbf154a29a61daae5e"} Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.949302 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.976893 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:08 crc kubenswrapper[5101]: I0122 09:54:08.985624 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4lvr8" event={"ID":"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c","Type":"ContainerStarted","Data":"a1058a2f0d0dcf5f030d314d3df3e3955929b7e5f4bfbe328db4b069d77fcb2c"} Jan 22 09:54:08 crc kubenswrapper[5101]: E0122 09:54:08.992791 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.492771447 +0000 UTC m=+121.936401704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.080273 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.087855 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.587828842 +0000 UTC m=+122.031459109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.094043 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" event={"ID":"98b4c472-57be-436c-a925-427f5bc72fca","Type":"ContainerStarted","Data":"e6010f64c523a6b2e120abf75de16003f9e4404425c61ed79bf9b6f0809e94c1"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.113912 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" event={"ID":"1d1245dc-9786-483a-a1b9-b187dafc3ab4","Type":"ContainerStarted","Data":"bee7b0fff56bb67888b97ddc28980ab3a0ba049db9d4c1f18b925c16f9e06026"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.219996 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" event={"ID":"b24dae6c-3ca8-4404-8587-69276f17daf6","Type":"ContainerStarted","Data":"c317530ad19d48f974778915d151586aaed1772ad01705a716f52f2dec02936a"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.221301 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.222194 
5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.722174505 +0000 UTC m=+122.165804772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.272216 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.272260 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" event={"ID":"e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e","Type":"ContainerStarted","Data":"f61a5f7ea1ec26bf9f4bcdba0f4c7316a73553318ff04f822c06914b8397c508"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.336776 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-lgbgn" event={"ID":"8c20fd39-64f0-40d4-9c12-915763fddfde","Type":"ContainerStarted","Data":"853abd84de3c14af99e4c71327f86155a72cf3589b158847ae8abfced9afb82b"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.340109 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.341648 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.841622071 +0000 UTC m=+122.285252348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.437034 5101 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-kxcn8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.437122 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" podUID="e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.442378 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.444230 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:09.944211607 +0000 UTC m=+122.387841874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.477207 5101 generic.go:358] "Generic (PLEG): container finished" podID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerID="5ec45c748736243bc40cb807de30b4fc57a955361087dcd68edde480200ecd37" exitCode=0 Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.477332 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" event={"ID":"bd3171cb-920d-48bd-9653-6cd577a560bd","Type":"ContainerDied","Data":"5ec45c748736243bc40cb807de30b4fc57a955361087dcd68edde480200ecd37"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.537627 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.552443 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.554052 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.054031984 +0000 UTC m=+122.497662251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.605855 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gl7dl" event={"ID":"d93b1df8-3fed-437c-a7ff-7fea2a61fcb0","Type":"ContainerStarted","Data":"b358c14403783b4266de16e8b52f2795b51db226e7db02c2321b053cdbc3494f"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.620113 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" event={"ID":"04938683-0667-47f5-8b0f-69dfb43c4c3a","Type":"ContainerStarted","Data":"16fd81937bbe822cbc8169c171e271f356f31311d7845e639279ad2b66623811"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.635788 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 22 09:54:09 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:09 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:09 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.635875 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.638455 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" event={"ID":"660347db-42cb-4f31-801d-97c3c3523f66","Type":"ContainerStarted","Data":"10fa935f9d1039ecc414469f6f11f5ac279b1e0d1c48fedd16975929546a8819"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.639678 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.658849 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.660728 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.160712604 +0000 UTC m=+122.604342871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.672480 5101 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-79dz2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.672572 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" podUID="660347db-42cb-4f31-801d-97c3c3523f66" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.681396 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" event={"ID":"a06e89cc-4b31-4452-95da-bcb17c66f029","Type":"ContainerStarted","Data":"c665494e4a9ee51f059e8d198e69d18a0a8e2f6594c4f3fc40d46cbf83b0a811"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.733819 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" event={"ID":"43dfdef8-e150-4eba-b790-6c9a395fba76","Type":"ContainerStarted","Data":"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.735031 5101 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.760331 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.760764 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.260737838 +0000 UTC m=+122.704368105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.765050 5101 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ss5t9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.765118 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.790906 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" event={"ID":"0c550c98-0e20-4316-8338-5268b336f2a2","Type":"ContainerStarted","Data":"b4605eb1c900338946de660e795b959357a1ebd8f101f2894eeb9a123b5b6b79"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.853793 5101 generic.go:358] "Generic (PLEG): container finished" podID="1c81934b-984b-4537-b93e-ecec345fdf73" containerID="ef95527cfc8ad8fb7498c1647f3d6f047e706b9cdae7b7fbb3f7b65d4cf5dd5f" exitCode=0 Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.853918 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" event={"ID":"1c81934b-984b-4537-b93e-ecec345fdf73","Type":"ContainerDied","Data":"ef95527cfc8ad8fb7498c1647f3d6f047e706b9cdae7b7fbb3f7b65d4cf5dd5f"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.855551 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h6k4m" podStartSLOduration=98.855506205 podStartE2EDuration="1m38.855506205s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:09.85175456 +0000 UTC m=+122.295384827" watchObservedRunningTime="2026-01-22 09:54:09.855506205 +0000 UTC m=+122.299136502" Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.863074 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" 
(UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.865234 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.365217596 +0000 UTC m=+122.808847863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.957531 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4q7cw" event={"ID":"b36514f0-26f0-4728-ae25-65a5ba99d2fa","Type":"ContainerStarted","Data":"b9e8403c0784272e2b5c9b9d8039f6bd02324be8493bf11a2c22aa7557d18163"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.968011 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:09 crc kubenswrapper[5101]: E0122 09:54:09.974244 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:10.47418128 +0000 UTC m=+122.917811547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.981649 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" event={"ID":"d07fefdf-c0b8-488e-94ec-b54954cfacce","Type":"ContainerStarted","Data":"d73ad9a85b195c8080962a4c0f7fc5557b2c8a8fa7590983e502bc09305db153"} Jan 22 09:54:09 crc kubenswrapper[5101]: I0122 09:54:09.984784 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.002304 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" podStartSLOduration=99.002277395 podStartE2EDuration="1m39.002277395s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:09.900920553 +0000 UTC m=+122.344550820" watchObservedRunningTime="2026-01-22 09:54:10.002277395 +0000 UTC m=+122.445907662" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.016996 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-hwdqt" 
event={"ID":"1bf878a0-4591-4ee2-96e9-db36fe28422d","Type":"ContainerStarted","Data":"c832db28f7d2cc589c16b4bec51112eff5f16234f34e1c58a1e7e38d896599bd"} Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.034698 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bc6vs" podStartSLOduration=99.03468082 podStartE2EDuration="1m39.03468082s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.033363503 +0000 UTC m=+122.476993770" watchObservedRunningTime="2026-01-22 09:54:10.03468082 +0000 UTC m=+122.478311087" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.076526 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.077786 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-gl7dl" podStartSLOduration=98.077770463 podStartE2EDuration="1m38.077770463s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.074562414 +0000 UTC m=+122.518192681" watchObservedRunningTime="2026-01-22 09:54:10.077770463 +0000 UTC m=+122.521400730" Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.079043 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.579027198 +0000 UTC m=+123.022657465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.089675 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" event={"ID":"40ddbf39-c363-4a9d-90d2-911b700eb8d1","Type":"ContainerStarted","Data":"cd95eef858f9682bea0c88b17dbedad33ad8154a6505d1bdc86f5908deb63b20"} Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.100741 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" podStartSLOduration=98.100716174 podStartE2EDuration="1m38.100716174s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.098011119 +0000 UTC m=+122.541641386" watchObservedRunningTime="2026-01-22 09:54:10.100716174 +0000 UTC m=+122.544346441" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.119721 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" event={"ID":"a6a20a61-7a61-4f52-b57c-c289c661f268","Type":"ContainerStarted","Data":"397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4"} Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.121318 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.121390 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.134225 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.134712 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-8hrzs" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.159252 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-k7lkq" podStartSLOduration=99.159225529 podStartE2EDuration="1m39.159225529s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.156181364 +0000 UTC m=+122.599811631" watchObservedRunningTime="2026-01-22 09:54:10.159225529 +0000 UTC m=+122.602855796" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.180347 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 
22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.182084 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.682066547 +0000 UTC m=+123.125696814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.223102 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" podStartSLOduration=99.223085802 podStartE2EDuration="1m39.223085802s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.221969431 +0000 UTC m=+122.665599698" watchObservedRunningTime="2026-01-22 09:54:10.223085802 +0000 UTC m=+122.666716069" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.224114 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" podStartSLOduration=98.224104601 podStartE2EDuration="1m38.224104601s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.198845835 +0000 UTC m=+122.642476132" watchObservedRunningTime="2026-01-22 
09:54:10.224104601 +0000 UTC m=+122.667734878" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.255867 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" podStartSLOduration=98.255850848 podStartE2EDuration="1m38.255850848s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.25415994 +0000 UTC m=+122.697790207" watchObservedRunningTime="2026-01-22 09:54:10.255850848 +0000 UTC m=+122.699481115" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.283323 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.283775 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.783758167 +0000 UTC m=+123.227388434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.288375 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl" podStartSLOduration=98.288335525 podStartE2EDuration="1m38.288335525s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.28244366 +0000 UTC m=+122.726073927" watchObservedRunningTime="2026-01-22 09:54:10.288335525 +0000 UTC m=+122.731965792" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.349461 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.396763 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.397295 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:10.897263568 +0000 UTC m=+123.340893835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.397391 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.397929 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:10.897906195 +0000 UTC m=+123.341536472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.499988 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.527050 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.02697143 +0000 UTC m=+123.470601707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.544400 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.545994 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.045971751 +0000 UTC m=+123.489602018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.654607 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.655222 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.155193562 +0000 UTC m=+123.598823839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.697080 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:10 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:10 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:10 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.697230 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.803386 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:10 crc kubenswrapper[5101]: E0122 09:54:10.804115 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:11.304072791 +0000 UTC m=+123.747703058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:10 crc kubenswrapper[5101]: I0122 09:54:10.991970 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.000565 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.500507307 +0000 UTC m=+123.944137584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.000885 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.001390 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.501363331 +0000 UTC m=+123.944993608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.049500 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-hwdqt" podStartSLOduration=100.049471855 podStartE2EDuration="1m40.049471855s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:10.801991812 +0000 UTC m=+123.245622099" watchObservedRunningTime="2026-01-22 09:54:11.049471855 +0000 UTC m=+123.493102122" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.051132 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.102006 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.102487 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.602469275 +0000 UTC m=+124.046099542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.137301 5101 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-7gkpq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": context deadline exceeded" start-of-body= Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.137368 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": context deadline exceeded" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.165289 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.180166 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" event={"ID":"ba64c46a-5bbe-470e-8dcd-560c5f1ddf59","Type":"ContainerStarted","Data":"befa0efde24f42f3f8dd444a495b2fcc318d53d4489e67fb593f99a9b76824f8"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.215104 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" event={"ID":"61c87129-51d7-446d-ac4a-d0f7c4e7a3f5","Type":"ContainerStarted","Data":"132f98ee4384d5a7cc6026ab1aad1f5c68462dcb9c17f3ecb9e451858d969f0f"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.218575 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.218936 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.718919598 +0000 UTC m=+124.162549865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.230622 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" event={"ID":"ed3aad7f-c0d8-468f-838b-a3700c3e60b0","Type":"ContainerStarted","Data":"4c5c998d33377ed8da1b3062ee202a7bb3c2b58cb8c8f7472e17a31bbe3ed099"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.249784 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" event={"ID":"a06e89cc-4b31-4452-95da-bcb17c66f029","Type":"ContainerStarted","Data":"1a70854e77cce77e0ac07b04de988bae238ce533aa5b3e31bb743ac081caceec"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.268012 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.268269 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.279836 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" event={"ID":"0c550c98-0e20-4316-8338-5268b336f2a2","Type":"ContainerStarted","Data":"b26338fe14bb14c7ce85eeb68e0f8d3bddce2e10bda1c56407b138a8c89c288b"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.313498 5101 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" event={"ID":"1c81934b-984b-4537-b93e-ecec345fdf73","Type":"ContainerStarted","Data":"a784f9a0ae5d8ddef0a71398ddff9a59df167d35b56e2db370cc30baf7eb94b7"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.319879 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.320285 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.320523 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.320922 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:11.820863526 +0000 UTC m=+124.264493793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.431315 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.431494 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.431828 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.432465 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:11.932442272 +0000 UTC m=+124.376072539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.434601 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.435830 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.454526 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" event={"ID":"4a368ef1-f996-42c8-ae62-a06dcff3e625","Type":"ContainerStarted","Data":"06623f3a8898a83f8026e2c03853796152f18ca94fa952b281f5cba024a94e49"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.461109 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" event={"ID":"ddbd0830-d2a2-4f8a-84b4-74041a59ee10","Type":"ContainerStarted","Data":"26e48d56a6f09577ab7aacce81a8b8b8884a9c2cff6fad14cda5dc780c1ca9c4"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.470980 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-mgr24" event={"ID":"9ae39b7f-ed42-4d00-b3d2-2f96abd7b64f","Type":"ContainerStarted","Data":"0b5b3432d4f1e0902284fe108a4d1e5869c212edd2efe4d7fbfb929aab0a8036"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.481613 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4lvr8" event={"ID":"4bcaae32-6fca-4120-8ca7-d9f5f709cb4c","Type":"ContainerStarted","Data":"7135ba04d8d592060171b4ffa8747e8c751271b934c1550c49ff387c7dead063"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.482277 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.494870 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" event={"ID":"901b7095-5e60-483e-996c-1d63888331ce","Type":"ContainerStarted","Data":"979bcc42685dc4c7e9f2f3dbbf9816ed282fc3c03aa4dc258f6d5b6a798afd85"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.505223 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" event={"ID":"112f7c63-b876-4377-8418-18d8abc92100","Type":"ContainerStarted","Data":"ee636cf146e425330e9c146d9c2440e2c9de4f37b8501bead6aa3e81b8a7be32"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.509434 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.513053 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" event={"ID":"04938683-0667-47f5-8b0f-69dfb43c4c3a","Type":"ContainerStarted","Data":"5d61b73b1589c5d801d584b6836476dcd2c34066bd9ad471832cea30eed77927"} Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.522364 5101 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" gracePeriod=30 Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.524507 5101 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ss5t9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.524551 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.534732 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.537146 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.037124486 +0000 UTC m=+124.480754753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.663183 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.663903 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.163888637 +0000 UTC m=+124.607518904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.735250 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.735902 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:11 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:11 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:11 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.736658 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.766524 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.766992 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.266958856 +0000 UTC m=+124.710589123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.841268 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-66wpn" podStartSLOduration=100.841248371 podStartE2EDuration="1m40.841248371s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:11.785211186 +0000 UTC m=+124.228841453" watchObservedRunningTime="2026-01-22 09:54:11.841248371 +0000 UTC m=+124.284878638" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.841689 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4q7cw" podStartSLOduration=11.841682033 podStartE2EDuration="11.841682033s" podCreationTimestamp="2026-01-22 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:11.838964138 
+0000 UTC m=+124.282594425" watchObservedRunningTime="2026-01-22 09:54:11.841682033 +0000 UTC m=+124.285312290" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.868518 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.868973 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.368953215 +0000 UTC m=+124.812583482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.893307 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.898706 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-kx9c8" podStartSLOduration=99.898678625 podStartE2EDuration="1m39.898678625s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:11.89597785 +0000 UTC m=+124.339608137" watchObservedRunningTime="2026-01-22 09:54:11.898678625 +0000 UTC m=+124.342308892" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.938888 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.940588 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.946692 5101 patch_prober.go:28] interesting pod/apiserver-8596bd845d-bbf9g container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.946761 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" podUID="1c81934b-984b-4537-b93e-ecec345fdf73" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.969797 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4lvr8" 
podStartSLOduration=11.969781562 podStartE2EDuration="11.969781562s" podCreationTimestamp="2026-01-22 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:11.966808759 +0000 UTC m=+124.410439036" watchObservedRunningTime="2026-01-22 09:54:11.969781562 +0000 UTC m=+124.413411829" Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.971091 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:11 crc kubenswrapper[5101]: E0122 09:54:11.971557 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.471539001 +0000 UTC m=+124.915169268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:11 crc kubenswrapper[5101]: I0122 09:54:11.998349 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-j478l" podStartSLOduration=100.998330559 podStartE2EDuration="1m40.998330559s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:11.993850564 +0000 UTC m=+124.437480841" watchObservedRunningTime="2026-01-22 09:54:11.998330559 +0000 UTC m=+124.441960826" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.032633 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-45b99" podStartSLOduration=101.032607206 podStartE2EDuration="1m41.032607206s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.021939428 +0000 UTC m=+124.465569695" watchObservedRunningTime="2026-01-22 09:54:12.032607206 +0000 UTC m=+124.476237483" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.050184 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jwllh" podStartSLOduration=101.050152237 podStartE2EDuration="1m41.050152237s" podCreationTimestamp="2026-01-22 09:52:31 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.046212796 +0000 UTC m=+124.489843073" watchObservedRunningTime="2026-01-22 09:54:12.050152237 +0000 UTC m=+124.493782514" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.074084 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.074519 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.574501197 +0000 UTC m=+125.018131464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.100342 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podStartSLOduration=101.100319278 podStartE2EDuration="1m41.100319278s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.082101249 +0000 UTC m=+124.525731526" watchObservedRunningTime="2026-01-22 09:54:12.100319278 +0000 UTC m=+124.543949555" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.108886 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.108966 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.114898 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8frxr" 
podStartSLOduration=101.114877924 podStartE2EDuration="1m41.114877924s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.114303428 +0000 UTC m=+124.557933705" watchObservedRunningTime="2026-01-22 09:54:12.114877924 +0000 UTC m=+124.558508191" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.175191 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.175688 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.675667642 +0000 UTC m=+125.119297909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.181754 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-8z65m" podStartSLOduration=101.181730762 podStartE2EDuration="1m41.181730762s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.14014202 +0000 UTC m=+124.583772297" watchObservedRunningTime="2026-01-22 09:54:12.181730762 +0000 UTC m=+124.625361029" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.215450 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-96sjm" podStartSLOduration=101.215408412 podStartE2EDuration="1m41.215408412s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.18238438 +0000 UTC m=+124.626014647" watchObservedRunningTime="2026-01-22 09:54:12.215408412 +0000 UTC m=+124.659038689" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.244199 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" podStartSLOduration=100.244180076 podStartE2EDuration="1m40.244180076s" podCreationTimestamp="2026-01-22 09:52:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.242691525 +0000 UTC m=+124.686321802" watchObservedRunningTime="2026-01-22 09:54:12.244180076 +0000 UTC m=+124.687810343" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.245122 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqgl" podStartSLOduration=101.245115062 podStartE2EDuration="1m41.245115062s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.216064611 +0000 UTC m=+124.659694878" watchObservedRunningTime="2026-01-22 09:54:12.245115062 +0000 UTC m=+124.688745329" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.266219 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mwpd8" podStartSLOduration=101.266200151 podStartE2EDuration="1m41.266200151s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:12.262710554 +0000 UTC m=+124.706340811" watchObservedRunningTime="2026-01-22 09:54:12.266200151 +0000 UTC m=+124.709830408" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.288738 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.289173 5101 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.789157422 +0000 UTC m=+125.232787689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.389581 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.389746 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.889718582 +0000 UTC m=+125.333348849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.389984 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.390493 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.890467922 +0000 UTC m=+125.334098189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.481400 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.491337 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.491799 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.991761722 +0000 UTC m=+125.435391989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.491892 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.492326 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:12.992305807 +0000 UTC m=+125.435936074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.514148 5101 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-7gkpq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.514244 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.518782 5101 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-kxcn8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.518836 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8" podUID="e7b2b320-c3fe-4bab-b6b7-2a2b56c6be8e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.556683 5101 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-79dz2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.557092 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2" podUID="660347db-42cb-4f31-801d-97c3c3523f66" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.593396 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.593768 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.0937425 +0000 UTC m=+125.537372767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.594152 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.594617 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.094603314 +0000 UTC m=+125.538233581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.631023 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:12 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:12 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:12 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.631672 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.652817 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39940: no serving certificate available for the kubelet" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.714946 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.715375 5101 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.215347767 +0000 UTC m=+125.658978024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.716357 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.717064 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.217041154 +0000 UTC m=+125.660671421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.740681 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" event={"ID":"bd3171cb-920d-48bd-9653-6cd577a560bd","Type":"ContainerStarted","Data":"2d23acb33682402d56bf75d0dceb96eca6c75b9fbf2002b8bd53e9ba248a7ca2"} Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.754182 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" event={"ID":"112f7c63-b876-4377-8418-18d8abc92100","Type":"ContainerStarted","Data":"1e21386a400fb7716352e92d3e08e5f9c75f07760c57dad58733c9349d1da9e0"} Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.764737 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.767224 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e"} Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.790030 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39950: no serving certificate available for the kubelet" Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.817770 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.818210 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.31819181 +0000 UTC m=+125.761822077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:12 crc kubenswrapper[5101]: I0122 09:54:12.919806 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:12 crc kubenswrapper[5101]: E0122 09:54:12.920203 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.420188798 +0000 UTC m=+125.863819065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.014611 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39954: no serving certificate available for the kubelet" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.021114 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.021508 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.521475447 +0000 UTC m=+125.965105714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.021790 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.022242 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.522213218 +0000 UTC m=+125.965843485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.201559 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.201755 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.701724392 +0000 UTC m=+126.145354659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.202020 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.202601 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.702575045 +0000 UTC m=+126.146205322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.209413 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39956: no serving certificate available for the kubelet" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.232097 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39962: no serving certificate available for the kubelet" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.281640 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39976: no serving certificate available for the kubelet" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.303639 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.304096 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.80406804 +0000 UTC m=+126.247698307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.405152 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.405591 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:13.905574996 +0000 UTC m=+126.349205263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.443591 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.456724 5101 patch_prober.go:28] interesting pod/console-64d44f6ddf-hwdqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.456796 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hwdqt" podUID="1bf878a0-4591-4ee2-96e9-db36fe28422d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.506347 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.506498 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:14.006469924 +0000 UTC m=+126.450100191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.506903 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.507269 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.007252446 +0000 UTC m=+126.450882713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.575253 5101 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ss5t9 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.575366 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.618606 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.618930 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.118905355 +0000 UTC m=+126.562535652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.648709 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:13 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:13 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:13 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.648805 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.677736 5101 ???:1] "http: TLS handshake error from 192.168.126.11:39988: no serving certificate available for the kubelet" Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.719955 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.720391 5101 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.220376749 +0000 UTC m=+126.664007016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.821200 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.821361 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.321333449 +0000 UTC m=+126.764963716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.821891 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.822214 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.322201753 +0000 UTC m=+126.765832020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.925372 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.925966 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.42592784 +0000 UTC m=+126.869558107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:13 crc kubenswrapper[5101]: I0122 09:54:13.926158 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:13 crc kubenswrapper[5101]: E0122 09:54:13.926670 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.426656561 +0000 UTC m=+126.870286828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.027699 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.028034 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.528017022 +0000 UTC m=+126.971647289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.043530 5101 ???:1] "http: TLS handshake error from 192.168.126.11:40004: no serving certificate available for the kubelet" Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.130167 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.630148814 +0000 UTC m=+127.073779081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.129792 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.186528 5101 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.186578 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.186595 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.187987 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.195802 5101 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-x59wv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.195874 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podUID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.196970 5101 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ss5t9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.197002 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" 
podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.443049 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.443124 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.443149 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-hwdqt" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.443163 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hdkwz"] Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.475721 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.476267 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.476980 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:14.976958072 +0000 UTC m=+127.420588349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654095 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654252 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654278 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654354 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities\") pod 
\"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654381 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654497 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6m5\" (UniqueName: \"kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.654803 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9588\" (UniqueName: \"kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.663899 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.163877173 +0000 UTC m=+127.607507500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.694089 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:14 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:14 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:14 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.694173 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.760916 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.761146 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:15.261108619 +0000 UTC m=+127.704738896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761476 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761536 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761557 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761620 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761653 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761700 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6m5\" (UniqueName: \"kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.761774 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9588\" (UniqueName: \"kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.761967 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.261950002 +0000 UTC m=+127.705580269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.762557 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.762687 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.762979 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.763024 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 
09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.814865 5101 generic.go:358] "Generic (PLEG): container finished" podID="568dbcc8-3ad6-4b41-acb0-8e4c28973db7" containerID="822d05fe23aa296548da09bf42a1462cbc17dac5804bf931f7140419e6ab7fa4" exitCode=0 Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.863095 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.863695 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.363672933 +0000 UTC m=+127.807303210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.918220 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6m5\" (UniqueName: \"kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5\") pod \"community-operators-p79nv\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.967160 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:14 crc kubenswrapper[5101]: E0122 09:54:14.967737 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.46771905 +0000 UTC m=+127.911349327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:14 crc kubenswrapper[5101]: I0122 09:54:14.974538 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9588\" (UniqueName: \"kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588\") pod \"certified-operators-z79d9\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:14.976740 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.120042 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.121737 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.123671 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.125765 5101 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-x59wv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.125854 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podUID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.126035 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.126270 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.626243838 +0000 UTC m=+128.069874095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.126362 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.126750 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.626742592 +0000 UTC m=+128.070372849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.185578 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.189540 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.204920 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdkwz"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.204974 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" event={"ID":"901b7095-5e60-483e-996c-1d63888331ce","Type":"ContainerStarted","Data":"4124b3122ff8b0659fc77123a5c66617f7c85e9533b4e4221a606b6225bb69be"} Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.205002 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5f642d6e-f3f5-4551-b1c7-ccf416fe502b","Type":"ContainerStarted","Data":"84ba6194f4beb34a6921f5333a58c40d4d1685f403045982b6767de59d127640"} Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.205016 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.205030 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" event={"ID":"568dbcc8-3ad6-4b41-acb0-8e4c28973db7","Type":"ContainerDied","Data":"822d05fe23aa296548da09bf42a1462cbc17dac5804bf931f7140419e6ab7fa4"} Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.230191 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.230389 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.230678 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.730642544 +0000 UTC m=+128.174272811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.230872 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.230979 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231021 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231101 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231241 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfzbs\" (UniqueName: \"kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231271 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231305 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.231561 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 
09:54:15.232058 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.732036913 +0000 UTC m=+128.175667180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.233679 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.233948 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.257072 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.257226 5101 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.257321 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.257528 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.264927 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.268374 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.276340 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.299947 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.314474 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.346617 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.348015 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.369520 5101 ???:1] "http: TLS handshake error from 192.168.126.11:51158: no serving certificate available for the kubelet" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.370078 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.370176 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.87014623 +0000 UTC m=+128.313776497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.371365 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.371570 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.371704 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.371827 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.371995 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfzbs\" (UniqueName: \"kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.372121 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.372324 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.372991 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.375629 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.873413222 +0000 UTC m=+128.317043489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.379227 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.380344 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.384133 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.385113 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.422993 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" podStartSLOduration=104.422964266 podStartE2EDuration="1m44.422964266s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:15.413618885 +0000 UTC m=+127.857249182" watchObservedRunningTime="2026-01-22 09:54:15.422964266 +0000 UTC m=+127.866594533" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.425876 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.437193 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfzbs\" (UniqueName: \"kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs\") pod \"certified-operators-4bdgc\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") " pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.462535 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw\") pod \"community-operators-hdkwz\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") " pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.474235 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.474575 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:15.974555317 +0000 UTC m=+128.418185594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.477019 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-z4tq2" podStartSLOduration=104.477002285 podStartE2EDuration="1m44.477002285s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:15.474995259 +0000 UTC m=+127.918625526" watchObservedRunningTime="2026-01-22 09:54:15.477002285 +0000 UTC m=+127.920632552" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.576079 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.576149 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.576580 5101 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.076561656 +0000 UTC m=+128.520191923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.578335 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdkwz" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.585597 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.585991 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.589830 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bdgc" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.597295 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.601229 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.625288 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d9d0a50-8eab-4184-b6dc-38872680242c-metrics-certs\") pod \"network-metrics-daemon-2kpwn\" (UID: \"4d9d0a50-8eab-4184-b6dc-38872680242c\") " pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.643154 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"] Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.648197 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.643501 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:15 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:15 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:15 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.649374 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.722367 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.722561 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.722608 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities\") pod 
\"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.722649 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtnf\" (UniqueName: \"kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.722755 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.222737939 +0000 UTC m=+128.666368206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.737360 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.739256 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2kpwn" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.744485 5101 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-x59wv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.744833 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podUID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.778852 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=32.778826726 podStartE2EDuration="32.778826726s" podCreationTimestamp="2026-01-22 09:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:15.737893423 +0000 UTC m=+128.181523690" watchObservedRunningTime="2026-01-22 09:54:15.778826726 +0000 UTC m=+128.222456993" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.824463 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.824554 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.824614 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.824701 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtnf\" (UniqueName: \"kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.850323 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: E0122 09:54:15.850695 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.350675993 +0000 UTC m=+128.794306270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.851132 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:15 crc kubenswrapper[5101]: I0122 09:54:15.966283 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:15.992173 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.492109674 +0000 UTC m=+128.935739941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.094091 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.095009 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.594981087 +0000 UTC m=+129.038611354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.100112 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"] Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.100394 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.194731 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtnf\" (UniqueName: \"kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf\") pod \"redhat-marketplace-k5s8n\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.196135 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.198913 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:16.698878079 +0000 UTC m=+129.142508346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.207080 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.207519 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.207570 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhv6\" (UniqueName: \"kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.207620 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.215871 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.715849633 +0000 UTC m=+129.159479900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.231022 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.308382 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.308902 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " 
pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.308926 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrhv6\" (UniqueName: \"kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.308945 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.309569 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.309648 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.809624042 +0000 UTC m=+129.253254309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.310978 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.406820 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrhv6\" (UniqueName: \"kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6\") pod \"redhat-marketplace-lzkjb\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") " pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.414092 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.415196 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.415680 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:16.915664624 +0000 UTC m=+129.359294891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.467700 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.467749 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"] Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.506740 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"] Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.506951 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.507712 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.526340 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.526678 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.026649523 +0000 UTC m=+129.470279790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.527025 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.527399 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.027382944 +0000 UTC m=+129.471013211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.547899 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630073 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630314 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630339 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630356 5101 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630389 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630412 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mfj\" (UniqueName: \"kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.630507 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqt9r\" (UniqueName: \"kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.630636 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.130618157 +0000 UTC m=+129.574248424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.654574 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:16 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:16 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:16 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.654631 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.683339 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzkjb" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.737936 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.737980 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.738002 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.738033 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.738054 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: 
\"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.738078 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mfj\" (UniqueName: \"kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.738144 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqt9r\" (UniqueName: \"kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.739050 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.739934 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.740167 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " 
pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.740386 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.740637 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.24062651 +0000 UTC m=+129.684256777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.841720 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.842131 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:17.342111115 +0000 UTC m=+129.785741382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.945775 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:16 crc kubenswrapper[5101]: E0122 09:54:16.946623 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.446604304 +0000 UTC m=+129.890234571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.955653 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mfj\" (UniqueName: \"kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj\") pod \"redhat-operators-dc6g7\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:16 crc kubenswrapper[5101]: I0122 09:54:16.987858 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqt9r\" (UniqueName: \"kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r\") pod \"redhat-operators-7t7v9\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") " pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.047314 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.047673 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:17.547653976 +0000 UTC m=+129.991284243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.048011 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.048508 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.54849881 +0000 UTC m=+129.992129067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.111850 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7t7v9" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.131534 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5f642d6e-f3f5-4551-b1c7-ccf416fe502b","Type":"ContainerStarted","Data":"55dcdaab18f4f902d89e937c53a2b7d304218c5c06171cc263735c4d15dbb2e8"} Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.141055 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.150210 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.150329 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.650306003 +0000 UTC m=+130.093936260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.150769 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.151261 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.65124587 +0000 UTC m=+130.094876137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.191605 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.193359 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.198160 5101 ???:1] "http: TLS handshake error from 192.168.126.11:51174: no serving certificate available for the kubelet" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.206975 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=7.206953826 podStartE2EDuration="7.206953826s" podCreationTimestamp="2026-01-22 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:17.206251606 +0000 UTC m=+129.649881873" watchObservedRunningTime="2026-01-22 09:54:17.206953826 +0000 UTC m=+129.650584103" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.253223 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 
09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.254241 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.754223846 +0000 UTC m=+130.197854113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.356111 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.357145 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.85712973 +0000 UTC m=+130.300759997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.443515 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.458734 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.459016 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.958967275 +0000 UTC m=+130.402597542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.459413 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.459913 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:17.959900891 +0000 UTC m=+130.403531158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.493997 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.506241 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.548720 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.548915 5101 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.581762 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume\") pod \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.582056 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh4lw\" (UniqueName: \"kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw\") pod \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.582233 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.582387 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume\") pod \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\" (UID: \"568dbcc8-3ad6-4b41-acb0-8e4c28973db7\") " Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.583334 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.594247 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.094216663 +0000 UTC m=+130.537846930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.595315 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume" (OuterVolumeSpecName: "config-volume") pod "568dbcc8-3ad6-4b41-acb0-8e4c28973db7" (UID: "568dbcc8-3ad6-4b41-acb0-8e4c28973db7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.670928 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:17 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:17 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:17 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.671751 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.685567 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.685637 5101 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.685904 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.185890053 +0000 UTC m=+130.629520320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.697796 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw" (OuterVolumeSpecName: "kube-api-access-fh4lw") pod "568dbcc8-3ad6-4b41-acb0-8e4c28973db7" (UID: "568dbcc8-3ad6-4b41-acb0-8e4c28973db7"). InnerVolumeSpecName "kube-api-access-fh4lw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.706665 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "568dbcc8-3ad6-4b41-acb0-8e4c28973db7" (UID: "568dbcc8-3ad6-4b41-acb0-8e4c28973db7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.736815 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.787269 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.787536 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fh4lw\" (UniqueName: \"kubernetes.io/projected/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-kube-api-access-fh4lw\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.787553 5101 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/568dbcc8-3ad6-4b41-acb0-8e4c28973db7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.787629 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.287608195 +0000 UTC m=+130.731238462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.890373 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.890854 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.390836598 +0000 UTC m=+130.834466865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:17 crc kubenswrapper[5101]: I0122 09:54:17.992463 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:17 crc kubenswrapper[5101]: E0122 09:54:17.993128 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.493095344 +0000 UTC m=+130.936725611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.040723 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2kpwn"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.062246 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.095755 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.096608 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.596587025 +0000 UTC m=+131.040217292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.135115 5101 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-x59wv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.135218 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podUID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.182026 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4lvr8" Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.200188 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.201796 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:18 
crc kubenswrapper[5101]: E0122 09:54:18.202576 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.702553695 +0000 UTC m=+131.146183962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.283432 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" event={"ID":"568dbcc8-3ad6-4b41-acb0-8e4c28973db7","Type":"ContainerDied","Data":"653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41"} Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.283833 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653dcf9d3c9eb88d5fe026102d8964b7bd332e7382d547798a15636f9333ca41" Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.283458 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-945sr" Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.288811 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.305631 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.320302 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.820271233 +0000 UTC m=+131.263901500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.320521 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"8a7e965bbc056e15b3319c16e2fd520bc862ccb4a473cc2064760e88c3a68c6d"} Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.407196 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.407504 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.907463839 +0000 UTC m=+131.351094106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.407886 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.409205 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:18.909196017 +0000 UTC m=+131.352826284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.432748 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdkwz"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.433757 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.508769 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.509163 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.009111368 +0000 UTC m=+131.452741635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.612027 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.612902 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.112884556 +0000 UTC m=+131.556514823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: W0122 09:54:18.649289 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d87939b_ab96_41d5_ad67_0b52de7b0613.slice/crio-82043d34100b1e6b8f26cb0a84e73b283bcd01e5137f4520c2fe816d44314648 WatchSource:0}: Error finding container 82043d34100b1e6b8f26cb0a84e73b283bcd01e5137f4520c2fe816d44314648: Status 404 returned error can't find the container with id 82043d34100b1e6b8f26cb0a84e73b283bcd01e5137f4520c2fe816d44314648 Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.658569 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:18 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:18 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:18 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.658637 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.713260 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.713516 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.213497017 +0000 UTC m=+131.657127284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.765770 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.816665 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.817565 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:19.317529222 +0000 UTC m=+131.761159499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:18 crc kubenswrapper[5101]: W0122 09:54:18.844857 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d1ac98b_01eb_4125_837f_28a4429c09c6.slice/crio-4fa00901525c3e04a548966ca7682d06a98452eb9275ec0414b0a638e5173ed8 WatchSource:0}: Error finding container 4fa00901525c3e04a548966ca7682d06a98452eb9275ec0414b0a638e5173ed8: Status 404 returned error can't find the container with id 4fa00901525c3e04a548966ca7682d06a98452eb9275ec0414b0a638e5173ed8 Jan 22 09:54:18 crc kubenswrapper[5101]: I0122 09:54:18.918367 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:18 crc kubenswrapper[5101]: E0122 09:54:18.918950 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.418925015 +0000 UTC m=+131.862555282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.021813 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.022604 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.52258921 +0000 UTC m=+131.966219477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.123384 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.123907 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.623866009 +0000 UTC m=+132.067496276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.227624 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.227815 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.228166 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.728148582 +0000 UTC m=+132.171778849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.252840 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bbf9g" Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.329627 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.330945 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.830920483 +0000 UTC m=+132.274550740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.356977 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"469f3d89d5d965b2dc133e4d59c547307d52580b973275f6463b66cb47238859"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.367603 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerStarted","Data":"5200b5ec6f299642fe8d20435619bf53eebef9d2b1133952c8eb666800477ad7"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.371414 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerStarted","Data":"e6d088bd4edcd9bb3121b7d5fa58b67efca2f7921089c3017c7ea71d386091d1"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.378477 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" event={"ID":"4d9d0a50-8eab-4184-b6dc-38872680242c","Type":"ContainerStarted","Data":"b54d44ee5d8eac29f17d3799799506e963af4e96b5f5cef7036408fd42970634"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.381168 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" 
event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerStarted","Data":"c205c8322925fa51094200f4d041e2c5cc7e1ce05d6e8a591fee5f1f9c639f75"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.382211 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerStarted","Data":"f9c55c10ad740e5b34ff14f482418309dd44919a1b30849e6341c2b71c4c3a84"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.399200 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"135e2a8e2bdc4e5881aeb5687f58457db08abbdcd8202aa95f9c8144915f0d80"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.405393 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" event={"ID":"932ff910-1ca7-4354-a306-1ce5f15f4f92","Type":"ContainerStarted","Data":"dfa5399b7c5e20c29de070abaa9ddbed6a3636ae658697b355f6a28db1bf2e14"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.406769 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerStarted","Data":"4fa00901525c3e04a548966ca7682d06a98452eb9275ec0414b0a638e5173ed8"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.419910 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerStarted","Data":"7277c821d4f7c031cc057356780994dcb36a8c8c01a886c04cf089b492c6725e"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.425276 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" 
event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerStarted","Data":"82043d34100b1e6b8f26cb0a84e73b283bcd01e5137f4520c2fe816d44314648"} Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.435546 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.436283 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:19.936265835 +0000 UTC m=+132.379896102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.537400 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.538372 5101 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.038329786 +0000 UTC m=+132.481960053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.538461 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.539096 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.039063057 +0000 UTC m=+132.482693324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.587686 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"6ae2daef0c1ae4b0b331d1e80d6ecefad0f011424523f9c7e6a85429239762da"}
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.596941 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerStarted","Data":"5fb0304b8c02221eae5486dc8de3d0a4f14b6636b7c78d834878d25f17c04cff"}
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.639815 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:19 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:19 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:19 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.639899 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.641067 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.641963 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.141911409 +0000 UTC m=+132.585541686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.735234 5101 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-x59wv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.735336 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv" podUID="bd3171cb-920d-48bd-9653-6cd577a560bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.743822 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.744822 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.244797053 +0000 UTC m=+132.688427320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.846282 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.846749 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.34670386 +0000 UTC m=+132.790334217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.964078 5101 ???:1] "http: TLS handshake error from 192.168.126.11:51180: no serving certificate available for the kubelet"
Jan 22 09:54:19 crc kubenswrapper[5101]: I0122 09:54:19.964897 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:19 crc kubenswrapper[5101]: E0122 09:54:19.965481 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.465460467 +0000 UTC m=+132.909090734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.066127 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.066652 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.566618641 +0000 UTC m=+133.010248908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.121024 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.121106 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.189604 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.190048 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.690018098 +0000 UTC m=+133.133648355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.292490 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.293106 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.793051738 +0000 UTC m=+133.236682005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.463515 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.463847 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:20.963834779 +0000 UTC m=+133.407465046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.564506 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.565161 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.065135959 +0000 UTC m=+133.508766226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.621111 5101 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-w5j22 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]log ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]etcd ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/max-in-flight-filter ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 22 09:54:20 crc kubenswrapper[5101]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/openshift.io-startinformers ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 22 09:54:20 crc kubenswrapper[5101]: livez check failed
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.623516 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" podUID="112f7c63-b876-4377-8418-18d8abc92100" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.640515 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:20 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:20 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:20 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.640614 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.667534 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.668126 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.168105788 +0000 UTC m=+133.611736055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.677275 5101 generic.go:358] "Generic (PLEG): container finished" podID="5f642d6e-f3f5-4551-b1c7-ccf416fe502b" containerID="55dcdaab18f4f902d89e937c53a2b7d304218c5c06171cc263735c4d15dbb2e8" exitCode=0
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.677607 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5f642d6e-f3f5-4551-b1c7-ccf416fe502b","Type":"ContainerDied","Data":"55dcdaab18f4f902d89e937c53a2b7d304218c5c06171cc263735c4d15dbb2e8"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.723721 5101 generic.go:358] "Generic (PLEG): container finished" podID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerID="e3d3d41d1ae640acc6ebd63537d727b397638f3690b04a59938eebc62d8443c4" exitCode=0
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.723908 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerDied","Data":"e3d3d41d1ae640acc6ebd63537d727b397638f3690b04a59938eebc62d8443c4"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.750123 5101 generic.go:358] "Generic (PLEG): container finished" podID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerID="84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482" exitCode=0
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.750318 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerDied","Data":"84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.764867 5101 generic.go:358] "Generic (PLEG): container finished" podID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerID="b543889f92a8c81a9153f94ad1f04c8cc1a550e9246007f97c057ecd17a92cdf" exitCode=0
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.764986 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerDied","Data":"b543889f92a8c81a9153f94ad1f04c8cc1a550e9246007f97c057ecd17a92cdf"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.768977 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.769379 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.269348237 +0000 UTC m=+133.712978504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.769581 5101 generic.go:358] "Generic (PLEG): container finished" podID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerID="0ca74f2740090f5fa5f8275a8863a98b886ca2e94f58307fa6eb5427f71441f1" exitCode=0
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.770833 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerDied","Data":"0ca74f2740090f5fa5f8275a8863a98b886ca2e94f58307fa6eb5427f71441f1"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.790491 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"7978463e1015712424dbb2dad20488a500d307c274b9ace97bfc1d7e5ef575d4"}
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.871023 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.877625 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.377591277 +0000 UTC m=+133.821221544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.972237 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.972717 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.472666079 +0000 UTC m=+133.916296496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.973860 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:20 crc kubenswrapper[5101]: E0122 09:54:20.974602 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.474589534 +0000 UTC m=+133.918219801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.988241 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.989296 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568dbcc8-3ad6-4b41-acb0-8e4c28973db7" containerName="collect-profiles"
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.989333 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="568dbcc8-3ad6-4b41-acb0-8e4c28973db7" containerName="collect-profiles"
Jan 22 09:54:20 crc kubenswrapper[5101]: I0122 09:54:20.989475 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="568dbcc8-3ad6-4b41-acb0-8e4c28973db7" containerName="collect-profiles"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.026588 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.026806 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.029348 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.032394 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.077041 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.077257 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.077378 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.077546 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.577522362 +0000 UTC m=+134.021152629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.133652 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-x59wv"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.182466 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.182576 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.182659 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.182773 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.183117 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.683096025 +0000 UTC m=+134.126726302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.232931 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.285336 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.285652 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.785622631 +0000 UTC m=+134.229252898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.385720 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.387319 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.387748 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.887729655 +0000 UTC m=+134.331359922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.489230 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.489609 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:21.989589842 +0000 UTC m=+134.433220109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.520173 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-79dz2"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.522954 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-kxcn8"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.573813 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq"
Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.591054 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.593027 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.093010333 +0000 UTC m=+134.536640590 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.643817 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:21 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:21 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:21 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.643941 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.692272 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.692525 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:22.192484652 +0000 UTC m=+134.636114929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.692998 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.694272 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.194254332 +0000 UTC m=+134.637884669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.794984 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.795369 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.295331397 +0000 UTC m=+134.738961664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.815838 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" event={"ID":"4d9d0a50-8eab-4184-b6dc-38872680242c","Type":"ContainerStarted","Data":"68965fb65533b2c10be84ffa60bab661c09c732ab02a08427efa31d6cfe5744a"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.817541 5101 generic.go:358] "Generic (PLEG): container finished" podID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerID="1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584" exitCode=0 Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.817666 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerDied","Data":"1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.824871 5101 generic.go:358] "Generic (PLEG): container finished" podID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerID="3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b" exitCode=0 Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.826241 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerDied","Data":"3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.866457 5101 
generic.go:358] "Generic (PLEG): container finished" podID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerID="55dee6045d8b53613a35e4d1754e6f1e1d33e396d6c37938c0d2348adbb75e4c" exitCode=0 Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.866580 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerDied","Data":"55dee6045d8b53613a35e4d1754e6f1e1d33e396d6c37938c0d2348adbb75e4c"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.877206 5101 generic.go:358] "Generic (PLEG): container finished" podID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerID="4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9" exitCode=0 Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.877357 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerDied","Data":"4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.890686 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"727dcd7f35d795b70877ea512cba1cc3b2309d2b92536d4c5fdba114b47e6504"} Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.890753 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 09:54:21 crc kubenswrapper[5101]: I0122 09:54:21.903387 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: 
\"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:21 crc kubenswrapper[5101]: E0122 09:54:21.923226 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.423193728 +0000 UTC m=+134.866823995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.022366 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.024181 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.524153469 +0000 UTC m=+134.967783736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.112468 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.112539 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.130713 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.131222 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.631204705 +0000 UTC m=+135.074834972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.208610 5101 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-w5j22 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]log ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]etcd ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/max-in-flight-filter ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 09:54:22 crc kubenswrapper[5101]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/project.openshift.io-projectcache ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/openshift.io-startinformers ok Jan 22 09:54:22 crc kubenswrapper[5101]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 09:54:22 crc kubenswrapper[5101]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 09:54:22 crc kubenswrapper[5101]: livez check failed Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.208716 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" podUID="112f7c63-b876-4377-8418-18d8abc92100" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.233471 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.233671 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.733636048 +0000 UTC m=+135.177266315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.233919 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.235607 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.735592894 +0000 UTC m=+135.179223201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.336667 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.337077 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.837045659 +0000 UTC m=+135.280675946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.344848 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.345386 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.845359237 +0000 UTC m=+135.288989494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.401107 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 09:54:22 crc kubenswrapper[5101]: W0122 09:54:22.420066 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7c406081_12e2_4b84_96e1_8c79712fcda4.slice/crio-ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49 WatchSource:0}: Error finding container ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49: Status 404 returned error can't find the container with id ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49 Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.447932 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.448081 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.948055818 +0000 UTC m=+135.391686085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.448491 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.448879 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:22.948868822 +0000 UTC m=+135.392499089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.454729 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.550374 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access\") pod \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.550564 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir\") pod \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\" (UID: \"5f642d6e-f3f5-4551-b1c7-ccf416fe502b\") " Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.550707 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.551020 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5f642d6e-f3f5-4551-b1c7-ccf416fe502b" (UID: "5f642d6e-f3f5-4551-b1c7-ccf416fe502b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.551054 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:23.051034287 +0000 UTC m=+135.494664554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.563648 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5f642d6e-f3f5-4551-b1c7-ccf416fe502b" (UID: "5f642d6e-f3f5-4551-b1c7-ccf416fe502b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.677935 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.678324 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.678340 5101 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f642d6e-f3f5-4551-b1c7-ccf416fe502b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 
09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.679100 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.179076694 +0000 UTC m=+135.622706971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.679430 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:22 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:22 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:22 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.679491 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.779268 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.779450 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.279401367 +0000 UTC m=+135.723031634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.779720 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.780094 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.280080546 +0000 UTC m=+135.723710803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.884594 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.885177 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.385150605 +0000 UTC m=+135.828780882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.935255 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2kpwn" event={"ID":"4d9d0a50-8eab-4184-b6dc-38872680242c","Type":"ContainerStarted","Data":"86f16b6eeed583f14061a356974bd1a4faeeef767abea15f0c6fb29c6e97f1e5"} Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.941361 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5f642d6e-f3f5-4551-b1c7-ccf416fe502b","Type":"ContainerDied","Data":"84ba6194f4beb34a6921f5333a58c40d4d1685f403045982b6767de59d127640"} Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.941567 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.941766 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ba6194f4beb34a6921f5333a58c40d4d1685f403045982b6767de59d127640" Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.944596 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7c406081-12e2-4b84-96e1-8c79712fcda4","Type":"ContainerStarted","Data":"ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49"} Jan 22 09:54:22 crc kubenswrapper[5101]: I0122 09:54:22.986427 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:22 crc kubenswrapper[5101]: E0122 09:54:22.986959 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.48694308 +0000 UTC m=+135.930573347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.088026 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.088233 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.588201919 +0000 UTC m=+136.031832186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.088491 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.089049 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.589032843 +0000 UTC m=+136.032663120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.190668 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.191044 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.691014453 +0000 UTC m=+136.134644720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.329823 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.330376 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.830353933 +0000 UTC m=+136.273984200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.431402 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.431679 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.931628523 +0000 UTC m=+136.375258790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.432070 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.433128 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:23.933114266 +0000 UTC m=+136.376744543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.461280 5101 patch_prober.go:28] interesting pod/console-64d44f6ddf-hwdqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.461383 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hwdqt" podUID="1bf878a0-4591-4ee2-96e9-db36fe28422d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.534240 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.534506 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.034468088 +0000 UTC m=+136.478098355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.534928 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.535311 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.035295892 +0000 UTC m=+136.478926159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.630209 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:23 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:23 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:23 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.630294 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.636052 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.636222 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:24.13618819 +0000 UTC m=+136.579818467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.637742 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.638146 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.138134126 +0000 UTC m=+136.581764393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.739986 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.740185 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.240157667 +0000 UTC m=+136.683787944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.740342 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.740907 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.240895788 +0000 UTC m=+136.684526055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.842350 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.842517 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.342491778 +0000 UTC m=+136.786122045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.842602 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.842996 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.342988262 +0000 UTC m=+136.786618529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.969635 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.969813 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.469768323 +0000 UTC m=+136.913398590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.970981 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:23 crc kubenswrapper[5101]: E0122 09:54:23.971593 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.471574364 +0000 UTC m=+136.915204631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:23 crc kubenswrapper[5101]: I0122 09:54:23.986707 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7c406081-12e2-4b84-96e1-8c79712fcda4","Type":"ContainerStarted","Data":"7d853fbefb1c465e91b29b32e22f52cc94bbbadc2626739ac5f071c252bcada0"} Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.016994 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.016978084 podStartE2EDuration="4.016978084s" podCreationTimestamp="2026-01-22 09:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:24.015446471 +0000 UTC m=+136.459076758" watchObservedRunningTime="2026-01-22 09:54:24.016978084 +0000 UTC m=+136.460608351" Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.019727 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2kpwn" podStartSLOduration=113.019712313 podStartE2EDuration="1m53.019712313s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:22.960678708 +0000 UTC m=+135.404308975" watchObservedRunningTime="2026-01-22 09:54:24.019712313 +0000 UTC m=+136.463342580" Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.074160 
5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.076231 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.576207421 +0000 UTC m=+137.019837688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.185578 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.186060 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.686027205 +0000 UTC m=+137.129657472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.196148 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.227992 5101 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.287603 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.289403 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.789370674 +0000 UTC m=+137.233000941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.432254 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.433220 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:24.933196833 +0000 UTC m=+137.376827100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.534501 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.537765 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.037735966 +0000 UTC m=+137.481366233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.647167 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:24 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:24 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:24 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.647290 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.652456 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.653125 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:54:25.15310864 +0000 UTC m=+137.596738907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.755540 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.755800 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.25576116 +0000 UTC m=+137.699391427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.757399 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.758400 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.258341083 +0000 UTC m=+137.701971520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.859458 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.859698 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.359662165 +0000 UTC m=+137.803292432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.859899 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.861441 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.361408265 +0000 UTC m=+137.805038532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.960964 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.961163 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.461132941 +0000 UTC m=+137.904763208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:24 crc kubenswrapper[5101]: I0122 09:54:24.961694 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:24 crc kubenswrapper[5101]: E0122 09:54:24.962073 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.462061947 +0000 UTC m=+137.905692224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.063182 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:25 crc kubenswrapper[5101]: E0122 09:54:25.063644 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.563618465 +0000 UTC m=+138.007248732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.063739 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:25 crc kubenswrapper[5101]: E0122 09:54:25.064643 5101 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:54:25.564608994 +0000 UTC m=+138.008239441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7pcd5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.123242 5101 ???:1] "http: TLS handshake error from 192.168.126.11:33870: no serving certificate available for the kubelet" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.135119 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" event={"ID":"932ff910-1ca7-4354-a306-1ce5f15f4f92","Type":"ContainerStarted","Data":"fc5a36c4e38395fb93ccd78e0ddb25798600227b416b82639ebb2fd28172d4f4"} Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.135245 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" event={"ID":"932ff910-1ca7-4354-a306-1ce5f15f4f92","Type":"ContainerStarted","Data":"a2d94386930cd231e2d3acf0b9ce468d08ac5deb751cc05cb25fe945e807b027"} Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.139654 5101 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T09:54:24.228018448Z","UUID":"f52fad9e-2665-4c1b-b1bf-f17f16f805f7","Handler":null,"Name":"","Endpoint":""} Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.163027 5101 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.163091 5101 csi_plugin.go:119] 
kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.165207 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.276112 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzj6r" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.301949 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.367696 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.411974 5101 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.412121 5101 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.457584 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7pcd5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.545811 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.553610 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.745213 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:25 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:25 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:25 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:25 crc kubenswrapper[5101]: I0122 09:54:25.745805 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.112630 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.172000 5101 generic.go:358] "Generic (PLEG): container finished" podID="7c406081-12e2-4b84-96e1-8c79712fcda4" containerID="7d853fbefb1c465e91b29b32e22f52cc94bbbadc2626739ac5f071c252bcada0" exitCode=0 Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.172080 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7c406081-12e2-4b84-96e1-8c79712fcda4","Type":"ContainerDied","Data":"7d853fbefb1c465e91b29b32e22f52cc94bbbadc2626739ac5f071c252bcada0"} Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.177454 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" 
event={"ID":"932ff910-1ca7-4354-a306-1ce5f15f4f92","Type":"ContainerStarted","Data":"f7e998b3c7925145e20583f1d8771ec74d3042e2d59b39fc4393b18406daf3b6"} Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.287548 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gf9jd" podStartSLOduration=26.287514692 podStartE2EDuration="26.287514692s" podCreationTimestamp="2026-01-22 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:26.273863221 +0000 UTC m=+138.717493658" watchObservedRunningTime="2026-01-22 09:54:26.287514692 +0000 UTC m=+138.731144959" Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.568807 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.570153 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"] Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.632139 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:26 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:26 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:26 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:26 crc kubenswrapper[5101]: I0122 09:54:26.632276 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 
09:54:27 crc kubenswrapper[5101]: I0122 09:54:27.198225 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:27 crc kubenswrapper[5101]: I0122 09:54:27.206026 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-w5j22" Jan 22 09:54:27 crc kubenswrapper[5101]: E0122 09:54:27.440239 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:27 crc kubenswrapper[5101]: E0122 09:54:27.477242 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:27 crc kubenswrapper[5101]: E0122 09:54:27.479610 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 09:54:27 crc kubenswrapper[5101]: E0122 09:54:27.479743 5101 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 09:54:27 crc 
kubenswrapper[5101]: I0122 09:54:27.634150 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:27 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:27 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:27 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:27 crc kubenswrapper[5101]: I0122 09:54:27.634234 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:28 crc kubenswrapper[5101]: I0122 09:54:28.628677 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:28 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld Jan 22 09:54:28 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:28 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:28 crc kubenswrapper[5101]: I0122 09:54:28.628852 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:29 crc kubenswrapper[5101]: I0122 09:54:29.632665 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:54:29 crc kubenswrapper[5101]: 
[-]has-synced failed: reason withheld Jan 22 09:54:29 crc kubenswrapper[5101]: [+]process-running ok Jan 22 09:54:29 crc kubenswrapper[5101]: healthz check failed Jan 22 09:54:29 crc kubenswrapper[5101]: I0122 09:54:29.632753 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:54:29 crc kubenswrapper[5101]: I0122 09:54:29.993406 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.085561 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir\") pod \"7c406081-12e2-4b84-96e1-8c79712fcda4\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.085921 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access\") pod \"7c406081-12e2-4b84-96e1-8c79712fcda4\" (UID: \"7c406081-12e2-4b84-96e1-8c79712fcda4\") " Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.086612 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c406081-12e2-4b84-96e1-8c79712fcda4" (UID: "7c406081-12e2-4b84-96e1-8c79712fcda4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.102923 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c406081-12e2-4b84-96e1-8c79712fcda4" (UID: "7c406081-12e2-4b84-96e1-8c79712fcda4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.122467 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.122640 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.216114 5101 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c406081-12e2-4b84-96e1-8c79712fcda4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.216185 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c406081-12e2-4b84-96e1-8c79712fcda4-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.234786 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7c406081-12e2-4b84-96e1-8c79712fcda4","Type":"ContainerDied","Data":"ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49"}
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.234844 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfb183121a2ee6d7e6534202c0b9739b1976464b7f6157a2245a4c5857cdb49"
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.234965 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 09:54:30 crc kubenswrapper[5101]: E0122 09:54:30.447528 5101 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod7c406081_12e2_4b84_96e1_8c79712fcda4.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.632227 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:30 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:30 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:30 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:30 crc kubenswrapper[5101]: I0122 09:54:30.632473 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:31 crc kubenswrapper[5101]: I0122 09:54:31.629009 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:31 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:31 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:31 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:31 crc kubenswrapper[5101]: I0122 09:54:31.629136 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.108235 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.108300 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.108352 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.109008 5101 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"aa346317e28affddb8798534b89e6cd17c995d4e4cca297ed1d891a3d6fe52f7"} pod="openshift-console/downloads-747b44746d-w2759" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.109071 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" containerID="cri-o://aa346317e28affddb8798534b89e6cd17c995d4e4cca297ed1d891a3d6fe52f7" gracePeriod=2
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.109833 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.109855 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.644143 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:32 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:32 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:32 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:32 crc kubenswrapper[5101]: I0122 09:54:32.644271 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.283229 5101 generic.go:358] "Generic (PLEG): container finished" podID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerID="aa346317e28affddb8798534b89e6cd17c995d4e4cca297ed1d891a3d6fe52f7" exitCode=0
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.283307 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-w2759" event={"ID":"ada11655-156b-4b1e-ad19-8391c89c8e6b","Type":"ContainerDied","Data":"aa346317e28affddb8798534b89e6cd17c995d4e4cca297ed1d891a3d6fe52f7"}
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.455001 5101 patch_prober.go:28] interesting pod/console-64d44f6ddf-hwdqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.455114 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hwdqt" podUID="1bf878a0-4591-4ee2-96e9-db36fe28422d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.629384 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:33 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:33 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:33 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:33 crc kubenswrapper[5101]: I0122 09:54:33.629491 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:34 crc kubenswrapper[5101]: I0122 09:54:34.290906 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" event={"ID":"b182bd55-8225-4386-aa02-40b8c9358df5","Type":"ContainerStarted","Data":"a465d5da2511aec60941cdfc504ba30ae4d06c359286367bd2dc500f5bc4d81d"}
Jan 22 09:54:34 crc kubenswrapper[5101]: I0122 09:54:34.629734 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:34 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:34 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:34 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:34 crc kubenswrapper[5101]: I0122 09:54:34.629822 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:35 crc kubenswrapper[5101]: I0122 09:54:35.390240 5101 ???:1] "http: TLS handshake error from 192.168.126.11:53762: no serving certificate available for the kubelet"
Jan 22 09:54:35 crc kubenswrapper[5101]: I0122 09:54:35.629359 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:35 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:35 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:35 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:35 crc kubenswrapper[5101]: I0122 09:54:35.629617 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:36 crc kubenswrapper[5101]: I0122 09:54:36.630234 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:36 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:36 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:36 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:36 crc kubenswrapper[5101]: I0122 09:54:36.630359 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:37 crc kubenswrapper[5101]: E0122 09:54:37.437287 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:37 crc kubenswrapper[5101]: E0122 09:54:37.439138 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:37 crc kubenswrapper[5101]: E0122 09:54:37.443334 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:37 crc kubenswrapper[5101]: E0122 09:54:37.443446 5101 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 09:54:37 crc kubenswrapper[5101]: I0122 09:54:37.629664 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:37 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:37 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:37 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:37 crc kubenswrapper[5101]: I0122 09:54:37.629756 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:38 crc kubenswrapper[5101]: I0122 09:54:38.633687 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:38 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:38 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:38 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:38 crc kubenswrapper[5101]: I0122 09:54:38.633779 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:39 crc kubenswrapper[5101]: I0122 09:54:39.630860 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:39 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:39 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:39 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:39 crc kubenswrapper[5101]: I0122 09:54:39.631007 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:40 crc kubenswrapper[5101]: I0122 09:54:40.629960 5101 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrw7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:54:40 crc kubenswrapper[5101]: [-]has-synced failed: reason withheld
Jan 22 09:54:40 crc kubenswrapper[5101]: [+]process-running ok
Jan 22 09:54:40 crc kubenswrapper[5101]: healthz check failed
Jan 22 09:54:40 crc kubenswrapper[5101]: I0122 09:54:40.630048 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k" podUID="027bf0e3-cc9b-4a15-85ca-75cdb81a7a63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:54:41 crc kubenswrapper[5101]: I0122 09:54:41.524061 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ldwwl"
Jan 22 09:54:41 crc kubenswrapper[5101]: I0122 09:54:41.805416 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:41 crc kubenswrapper[5101]: I0122 09:54:41.810255 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-jrw7k"
Jan 22 09:54:42 crc kubenswrapper[5101]: I0122 09:54:42.110607 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:42 crc kubenswrapper[5101]: I0122 09:54:42.110709 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:43 crc kubenswrapper[5101]: I0122 09:54:43.454940 5101 patch_prober.go:28] interesting pod/console-64d44f6ddf-hwdqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 22 09:54:43 crc kubenswrapper[5101]: I0122 09:54:43.455356 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-hwdqt" podUID="1bf878a0-4591-4ee2-96e9-db36fe28422d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 22 09:54:43 crc kubenswrapper[5101]: I0122 09:54:43.583563 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l6rf4_a6a20a61-7a61-4f52-b57c-c289c661f268/kube-multus-additional-cni-plugins/0.log"
Jan 22 09:54:43 crc kubenswrapper[5101]: I0122 09:54:43.583606 5101 generic.go:358] "Generic (PLEG): container finished" podID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" exitCode=137
Jan 22 09:54:43 crc kubenswrapper[5101]: I0122 09:54:43.583711 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" event={"ID":"a6a20a61-7a61-4f52-b57c-c289c661f268","Type":"ContainerDied","Data":"397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4"}
Jan 22 09:54:47 crc kubenswrapper[5101]: E0122 09:54:47.435494 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4 is running failed: container process not found" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:47 crc kubenswrapper[5101]: E0122 09:54:47.436018 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4 is running failed: container process not found" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:47 crc kubenswrapper[5101]: E0122 09:54:47.436483 5101 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4 is running failed: container process not found" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 09:54:47 crc kubenswrapper[5101]: E0122 09:54:47.436515 5101 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.110610 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.111094 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.756312 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757182 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c406081-12e2-4b84-96e1-8c79712fcda4" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757207 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c406081-12e2-4b84-96e1-8c79712fcda4" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757236 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f642d6e-f3f5-4551-b1c7-ccf416fe502b" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757245 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f642d6e-f3f5-4551-b1c7-ccf416fe502b" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757380 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="5f642d6e-f3f5-4551-b1c7-ccf416fe502b" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.757403 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c406081-12e2-4b84-96e1-8c79712fcda4" containerName="pruner"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.876922 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.877107 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.879595 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.879888 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.951135 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.986861 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:52 crc kubenswrapper[5101]: I0122 09:54:52.986994 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.088518 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.088665 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.088777 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.118204 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.203948 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.461159 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.466379 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-hwdqt"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.563919 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l6rf4_a6a20a61-7a61-4f52-b57c-c289c661f268/kube-multus-additional-cni-plugins/0.log"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.564031 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596318 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tqs\" (UniqueName: \"kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs\") pod \"a6a20a61-7a61-4f52-b57c-c289c661f268\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") "
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596377 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready\") pod \"a6a20a61-7a61-4f52-b57c-c289c661f268\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") "
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596631 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir\") pod \"a6a20a61-7a61-4f52-b57c-c289c661f268\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") "
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596733 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "a6a20a61-7a61-4f52-b57c-c289c661f268" (UID: "a6a20a61-7a61-4f52-b57c-c289c661f268"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596753 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") pod \"a6a20a61-7a61-4f52-b57c-c289c661f268\" (UID: \"a6a20a61-7a61-4f52-b57c-c289c661f268\") "
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.596882 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready" (OuterVolumeSpecName: "ready") pod "a6a20a61-7a61-4f52-b57c-c289c661f268" (UID: "a6a20a61-7a61-4f52-b57c-c289c661f268"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.597111 5101 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a6a20a61-7a61-4f52-b57c-c289c661f268-ready\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.597132 5101 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6a20a61-7a61-4f52-b57c-c289c661f268-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.597334 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "a6a20a61-7a61-4f52-b57c-c289c661f268" (UID: "a6a20a61-7a61-4f52-b57c-c289c661f268"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.603619 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs" (OuterVolumeSpecName: "kube-api-access-64tqs") pod "a6a20a61-7a61-4f52-b57c-c289c661f268" (UID: "a6a20a61-7a61-4f52-b57c-c289c661f268"). InnerVolumeSpecName "kube-api-access-64tqs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.698521 5101 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6a20a61-7a61-4f52-b57c-c289c661f268-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.698559 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-64tqs\" (UniqueName: \"kubernetes.io/projected/a6a20a61-7a61-4f52-b57c-c289c661f268-kube-api-access-64tqs\") on node \"crc\" DevicePath \"\""
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.739238 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l6rf4_a6a20a61-7a61-4f52-b57c-c289c661f268/kube-multus-additional-cni-plugins/0.log"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.740088 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.740678 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l6rf4" event={"ID":"a6a20a61-7a61-4f52-b57c-c289c661f268","Type":"ContainerDied","Data":"2c10f229a080575068b84f9499107fbe4640f70e3123011ec6fd27b45ce47090"}
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.740759 5101 scope.go:117] "RemoveContainer" containerID="397bcb29892ff878cb14e94a9cb86cfb5a7633c63de927ad62eec89e33473cf4"
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.801113 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l6rf4"]
Jan 22 09:54:53 crc kubenswrapper[5101]: I0122 09:54:53.805186 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l6rf4"]
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.535558 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" path="/var/lib/kubelet/pods/a6a20a61-7a61-4f52-b57c-c289c661f268/volumes"
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.828171 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerStarted","Data":"4cd5c5382b001db821961f720b87c45e966270815dd2a2cc5636b6a6600a5028"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.830249 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerStarted","Data":"e24d1e71d6b4c45531d7bcd0c199a7cb64360495e2390d41a434e6937915a19d"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.832145 5101 generic.go:358] "Generic (PLEG): container finished" podID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerID="42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91" exitCode=0
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.832321 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerDied","Data":"42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.836219 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerStarted","Data":"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.839083 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerStarted","Data":"f7608676d31883cba846883a7a8baa0a1d52539413582cc0f0858953e1d8adb0"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.841623 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerStarted","Data":"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.843903 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-w2759" event={"ID":"ada11655-156b-4b1e-ad19-8391c89c8e6b","Type":"ContainerStarted","Data":"167c1949e622febce160428028be14c901f5243321edc2d38800718a63225341"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.847903 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerStarted","Data":"b78819dfd5b403e1060fdb15ec07a0325bf368a96e076deef6e1bf6dedabe85f"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.852705 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" event={"ID":"b182bd55-8225-4386-aa02-40b8c9358df5","Type":"ContainerStarted","Data":"743114f6f0ee4657c48650fad0701dd85d8bc16d9e13d5e3772c8eb161e867a2"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.855665 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerStarted","Data":"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952"}
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.922960 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.923089 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.923131 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.923635 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:54:54 crc kubenswrapper[5101]: I0122 09:54:54.938385 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.235909 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" podStartSLOduration=144.235878233 podStartE2EDuration="2m24.235878233s" podCreationTimestamp="2026-01-22 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:55.230979743 +0000 UTC m=+167.674610010" watchObservedRunningTime="2026-01-22 09:54:55.235878233 +0000 UTC m=+167.679508500"
Jan 22 09:54:55 crc kubenswrapper[5101]: W0122 09:54:55.458793 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podab7ec671_bb9f_4656_9f0a_38de344d05c8.slice/crio-b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30 WatchSource:0}: Error finding container 
b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30: Status 404 returned error can't find the container with id b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30 Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.867382 5101 generic.go:358] "Generic (PLEG): container finished" podID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerID="f7608676d31883cba846883a7a8baa0a1d52539413582cc0f0858953e1d8adb0" exitCode=0 Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.867503 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerDied","Data":"f7608676d31883cba846883a7a8baa0a1d52539413582cc0f0858953e1d8adb0"} Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.880351 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ab7ec671-bb9f-4656-9f0a-38de344d05c8","Type":"ContainerStarted","Data":"b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30"} Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.889551 5101 generic.go:358] "Generic (PLEG): container finished" podID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerID="f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952" exitCode=0 Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.889745 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerDied","Data":"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952"} Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.915901 5101 ???:1] "http: TLS handshake error from 192.168.126.11:50330: no serving certificate available for the kubelet" Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.975964 5101 generic.go:358] "Generic (PLEG): container finished" 
podID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerID="e24d1e71d6b4c45531d7bcd0c199a7cb64360495e2390d41a434e6937915a19d" exitCode=0 Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.978733 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerDied","Data":"e24d1e71d6b4c45531d7bcd0c199a7cb64360495e2390d41a434e6937915a19d"} Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.980779 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:54:55 crc kubenswrapper[5101]: I0122 09:54:55.980862 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.035714 5101 generic.go:358] "Generic (PLEG): container finished" podID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerID="4cd5c5382b001db821961f720b87c45e966270815dd2a2cc5636b6a6600a5028" exitCode=0 Jan 22 09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.035811 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerDied","Data":"4cd5c5382b001db821961f720b87c45e966270815dd2a2cc5636b6a6600a5028"} Jan 22 09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.047569 5101 generic.go:358] "Generic (PLEG): container finished" podID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerID="b78819dfd5b403e1060fdb15ec07a0325bf368a96e076deef6e1bf6dedabe85f" exitCode=0 Jan 22 
09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.047682 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerDied","Data":"b78819dfd5b403e1060fdb15ec07a0325bf368a96e076deef6e1bf6dedabe85f"} Jan 22 09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.086529 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:54:57 crc kubenswrapper[5101]: I0122 09:54:57.086896 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.054850 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerStarted","Data":"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.058315 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerStarted","Data":"04549feb994c2b74dc069466d54b7ab3259848b8ec705756ef416f0778f892f8"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.061308 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerStarted","Data":"71609ad87fefdf289ea2ee2b1f0125b35dd8ec1ce52d7137527b29ead9c42c04"} Jan 22 09:54:58 
crc kubenswrapper[5101]: I0122 09:54:58.065613 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerStarted","Data":"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.071163 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerStarted","Data":"2e9903f352101c461b64723325ba12c9bd801210aa8c0aa1bb1a2a3fc7d2e0b6"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.072611 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ab7ec671-bb9f-4656-9f0a-38de344d05c8","Type":"ContainerStarted","Data":"87370abc5ccabf67a6db65d8d4821f5f36c074d3d6305972f5fc170ed09079cc"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.074647 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerStarted","Data":"35fa76b3d3f6e3daf6928e8e074aae3069c434cd14a0fc571d9b16f06fe71da9"} Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.146525 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p79nv" podStartSLOduration=11.867941442 podStartE2EDuration="45.14650034s" podCreationTimestamp="2026-01-22 09:54:13 +0000 UTC" firstStartedPulling="2026-01-22 09:54:20.75164566 +0000 UTC m=+133.195275927" lastFinishedPulling="2026-01-22 09:54:54.030204538 +0000 UTC m=+166.473834825" observedRunningTime="2026-01-22 09:54:58.141016193 +0000 UTC m=+170.584646480" watchObservedRunningTime="2026-01-22 09:54:58.14650034 +0000 UTC m=+170.590130617" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.193033 5101 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bdgc" podStartSLOduration=11.864598937 podStartE2EDuration="45.193006412s" podCreationTimestamp="2026-01-22 09:54:13 +0000 UTC" firstStartedPulling="2026-01-22 09:54:20.766056673 +0000 UTC m=+133.209686940" lastFinishedPulling="2026-01-22 09:54:54.094464158 +0000 UTC m=+166.538094415" observedRunningTime="2026-01-22 09:54:58.185377214 +0000 UTC m=+170.629007481" watchObservedRunningTime="2026-01-22 09:54:58.193006412 +0000 UTC m=+170.636636679" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.209002 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=6.208975699 podStartE2EDuration="6.208975699s" podCreationTimestamp="2026-01-22 09:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:54:58.207366403 +0000 UTC m=+170.650996680" watchObservedRunningTime="2026-01-22 09:54:58.208975699 +0000 UTC m=+170.652605966" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.233851 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzkjb" podStartSLOduration=9.910856932 podStartE2EDuration="43.233827411s" podCreationTimestamp="2026-01-22 09:54:15 +0000 UTC" firstStartedPulling="2026-01-22 09:54:20.770633424 +0000 UTC m=+133.214263691" lastFinishedPulling="2026-01-22 09:54:54.093603903 +0000 UTC m=+166.537234170" observedRunningTime="2026-01-22 09:54:58.23028641 +0000 UTC m=+170.673916697" watchObservedRunningTime="2026-01-22 09:54:58.233827411 +0000 UTC m=+170.677457678" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.250814 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z79d9" podStartSLOduration=11.948596243 
podStartE2EDuration="45.250797587s" podCreationTimestamp="2026-01-22 09:54:13 +0000 UTC" firstStartedPulling="2026-01-22 09:54:20.728017934 +0000 UTC m=+133.171648201" lastFinishedPulling="2026-01-22 09:54:54.030219278 +0000 UTC m=+166.473849545" observedRunningTime="2026-01-22 09:54:58.24847224 +0000 UTC m=+170.692102527" watchObservedRunningTime="2026-01-22 09:54:58.250797587 +0000 UTC m=+170.694427854" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.266613 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hdkwz" podStartSLOduration=13.040622413 podStartE2EDuration="45.266593039s" podCreationTimestamp="2026-01-22 09:54:13 +0000 UTC" firstStartedPulling="2026-01-22 09:54:21.867600276 +0000 UTC m=+134.311230543" lastFinishedPulling="2026-01-22 09:54:54.093570902 +0000 UTC m=+166.537201169" observedRunningTime="2026-01-22 09:54:58.263808459 +0000 UTC m=+170.707438736" watchObservedRunningTime="2026-01-22 09:54:58.266593039 +0000 UTC m=+170.710223306" Jan 22 09:54:58 crc kubenswrapper[5101]: I0122 09:54:58.284950 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5s8n" podStartSLOduration=11.287845392 podStartE2EDuration="43.284933524s" podCreationTimestamp="2026-01-22 09:54:15 +0000 UTC" firstStartedPulling="2026-01-22 09:54:21.818599503 +0000 UTC m=+134.262229760" lastFinishedPulling="2026-01-22 09:54:53.815687625 +0000 UTC m=+166.259317892" observedRunningTime="2026-01-22 09:54:58.284215504 +0000 UTC m=+170.727845771" watchObservedRunningTime="2026-01-22 09:54:58.284933524 +0000 UTC m=+170.728563791" Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.623947 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.624775 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.624799 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.624947 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="a6a20a61-7a61-4f52-b57c-c289c661f268" containerName="kube-multus-additional-cni-plugins" Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.922901 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 09:55:00 crc kubenswrapper[5101]: I0122 09:55:00.923114 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:00.997287 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:00.997395 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:00.997533 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock\") pod \"installer-12-crc\" (UID: 
\"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.098791 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.099190 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.099270 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.098970 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.099641 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 
09:55:01.122619 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access\") pod \"installer-12-crc\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.302678 5101 generic.go:358] "Generic (PLEG): container finished" podID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerID="68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598" exitCode=0 Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.302780 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerDied","Data":"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598"} Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.305770 5101 generic.go:358] "Generic (PLEG): container finished" podID="ab7ec671-bb9f-4656-9f0a-38de344d05c8" containerID="87370abc5ccabf67a6db65d8d4821f5f36c074d3d6305972f5fc170ed09079cc" exitCode=0 Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.305876 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ab7ec671-bb9f-4656-9f0a-38de344d05c8","Type":"ContainerDied","Data":"87370abc5ccabf67a6db65d8d4821f5f36c074d3d6305972f5fc170ed09079cc"} Jan 22 09:55:01 crc kubenswrapper[5101]: I0122 09:55:01.426169 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.067475 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.213725 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.213795 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.317596 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3","Type":"ContainerStarted","Data":"62518e5d905f71fc025d21ec7cf2d75c0c7839e167c57680418494ca651de8b6"} Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.320057 5101 generic.go:358] "Generic (PLEG): container finished" podID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerID="03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404" exitCode=0 Jan 22 09:55:02 crc kubenswrapper[5101]: I0122 09:55:02.320116 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerDied","Data":"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"} Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.333093 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" 
event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerStarted","Data":"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2"} Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.464094 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.659304 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir\") pod \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.659437 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab7ec671-bb9f-4656-9f0a-38de344d05c8" (UID: "ab7ec671-bb9f-4656-9f0a-38de344d05c8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.659620 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access\") pod \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\" (UID: \"ab7ec671-bb9f-4656-9f0a-38de344d05c8\") " Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.659927 5101 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.670187 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab7ec671-bb9f-4656-9f0a-38de344d05c8" (UID: "ab7ec671-bb9f-4656-9f0a-38de344d05c8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:55:03 crc kubenswrapper[5101]: I0122 09:55:03.760882 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7ec671-bb9f-4656-9f0a-38de344d05c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.341321 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerStarted","Data":"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"} Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.343303 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.344172 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ab7ec671-bb9f-4656-9f0a-38de344d05c8","Type":"ContainerDied","Data":"b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30"} Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.344216 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b710743b6a1fa3c216acdc84ec11098b49e21e5821931cfd0be23ca7d51cbb30" Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.977955 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:55:04 crc kubenswrapper[5101]: I0122 09:55:04.979115 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.168738 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.189647 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dc6g7" podStartSLOduration=16.895975589 podStartE2EDuration="49.189621772s" podCreationTimestamp="2026-01-22 09:54:16 +0000 UTC" firstStartedPulling="2026-01-22 09:54:21.826011896 +0000 UTC m=+134.269642163" lastFinishedPulling="2026-01-22 09:54:54.119658079 +0000 UTC m=+166.563288346" observedRunningTime="2026-01-22 09:55:04.4450275 +0000 UTC m=+176.888657767" watchObservedRunningTime="2026-01-22 09:55:05.189621772 +0000 UTC m=+177.633252049" Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.292706 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-z79d9"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.293411 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-z79d9"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.350194 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3","Type":"ContainerStarted","Data":"bad55e13ee2180f037e10e50c92eb4654ed7ba79fe3eadddc52db3bf8691ce25"}
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.377662 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=5.377646347 podStartE2EDuration="5.377646347s" podCreationTimestamp="2026-01-22 09:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:55:05.369808882 +0000 UTC m=+177.813439149" watchObservedRunningTime="2026-01-22 09:55:05.377646347 +0000 UTC m=+177.821276604"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.400330 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7t7v9" podStartSLOduration=17.053737826 podStartE2EDuration="49.400318916s" podCreationTimestamp="2026-01-22 09:54:16 +0000 UTC" firstStartedPulling="2026-01-22 09:54:21.878251111 +0000 UTC m=+134.321881378" lastFinishedPulling="2026-01-22 09:54:54.224832201 +0000 UTC m=+166.668462468" observedRunningTime="2026-01-22 09:55:05.398054551 +0000 UTC m=+177.841684818" watchObservedRunningTime="2026-01-22 09:55:05.400318916 +0000 UTC m=+177.843949183"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.469256 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z79d9"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.579770 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.580963 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.590362 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.590435 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.610008 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p79nv"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.727251 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:05 crc kubenswrapper[5101]: I0122 09:55:05.741298 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.415589 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-k5s8n"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.416017 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5s8n"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.488405 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.488943 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z79d9"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.489211 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5s8n"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.610131 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.685012 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.685078 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:06 crc kubenswrapper[5101]: I0122 09:55:06.846989 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.103473 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.103560 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.113552 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.113620 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.142594 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dc6g7"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.142683 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dc6g7"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.514209 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.517634 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdkwz"]
Jan 22 09:55:07 crc kubenswrapper[5101]: I0122 09:55:07.563134 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5s8n"
Jan 22 09:55:08 crc kubenswrapper[5101]: I0122 09:55:08.207074 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dc6g7" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="registry-server" probeResult="failure" output=<
Jan 22 09:55:08 crc kubenswrapper[5101]: timeout: failed to connect service ":50051" within 1s
Jan 22 09:55:08 crc kubenswrapper[5101]: >
Jan 22 09:55:08 crc kubenswrapper[5101]: I0122 09:55:08.210966 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7t7v9" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="registry-server" probeResult="failure" output=<
Jan 22 09:55:08 crc kubenswrapper[5101]: timeout: failed to connect service ":50051" within 1s
Jan 22 09:55:08 crc kubenswrapper[5101]: >
Jan 22 09:55:09 crc kubenswrapper[5101]: I0122 09:55:09.486990 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"]
Jan 22 09:55:09 crc kubenswrapper[5101]: I0122 09:55:09.487353 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bdgc" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="registry-server" containerID="cri-o://04549feb994c2b74dc069466d54b7ab3259848b8ec705756ef416f0778f892f8" gracePeriod=2
Jan 22 09:55:09 crc kubenswrapper[5101]: I0122 09:55:09.487978 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hdkwz" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="registry-server" containerID="cri-o://2e9903f352101c461b64723325ba12c9bd801210aa8c0aa1bb1a2a3fc7d2e0b6" gracePeriod=2
Jan 22 09:55:10 crc kubenswrapper[5101]: I0122 09:55:10.496397 5101 generic.go:358] "Generic (PLEG): container finished" podID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerID="04549feb994c2b74dc069466d54b7ab3259848b8ec705756ef416f0778f892f8" exitCode=0
Jan 22 09:55:10 crc kubenswrapper[5101]: I0122 09:55:10.496802 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerDied","Data":"04549feb994c2b74dc069466d54b7ab3259848b8ec705756ef416f0778f892f8"}
Jan 22 09:55:11 crc kubenswrapper[5101]: I0122 09:55:11.270623 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"]
Jan 22 09:55:11 crc kubenswrapper[5101]: I0122 09:55:11.271239 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lzkjb" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="registry-server" containerID="cri-o://71609ad87fefdf289ea2ee2b1f0125b35dd8ec1ce52d7137527b29ead9c42c04" gracePeriod=2
Jan 22 09:55:12 crc kubenswrapper[5101]: I0122 09:55:12.108517 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:55:12 crc kubenswrapper[5101]: I0122 09:55:12.109822 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:55:12 crc kubenswrapper[5101]: I0122 09:55:12.509592 5101 generic.go:358] "Generic (PLEG): container finished" podID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerID="2e9903f352101c461b64723325ba12c9bd801210aa8c0aa1bb1a2a3fc7d2e0b6" exitCode=0
Jan 22 09:55:12 crc kubenswrapper[5101]: I0122 09:55:12.509681 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerDied","Data":"2e9903f352101c461b64723325ba12c9bd801210aa8c0aa1bb1a2a3fc7d2e0b6"}
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.305784 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.310114 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330012 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities\") pod \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330072 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw\") pod \"fc21b80b-c600-46ec-b79a-8988ef57da90\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330173 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content\") pod \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330227 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities\") pod \"fc21b80b-c600-46ec-b79a-8988ef57da90\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330254 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content\") pod \"fc21b80b-c600-46ec-b79a-8988ef57da90\" (UID: \"fc21b80b-c600-46ec-b79a-8988ef57da90\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.330323 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfzbs\" (UniqueName: \"kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs\") pod \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\" (UID: \"de72e1f1-e5ac-4a87-9b4b-aa2c16527255\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.331655 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities" (OuterVolumeSpecName: "utilities") pod "de72e1f1-e5ac-4a87-9b4b-aa2c16527255" (UID: "de72e1f1-e5ac-4a87-9b4b-aa2c16527255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.332330 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities" (OuterVolumeSpecName: "utilities") pod "fc21b80b-c600-46ec-b79a-8988ef57da90" (UID: "fc21b80b-c600-46ec-b79a-8988ef57da90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.338653 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs" (OuterVolumeSpecName: "kube-api-access-zfzbs") pod "de72e1f1-e5ac-4a87-9b4b-aa2c16527255" (UID: "de72e1f1-e5ac-4a87-9b4b-aa2c16527255"). InnerVolumeSpecName "kube-api-access-zfzbs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.339650 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw" (OuterVolumeSpecName: "kube-api-access-w4qlw") pod "fc21b80b-c600-46ec-b79a-8988ef57da90" (UID: "fc21b80b-c600-46ec-b79a-8988ef57da90"). InnerVolumeSpecName "kube-api-access-w4qlw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.366760 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de72e1f1-e5ac-4a87-9b4b-aa2c16527255" (UID: "de72e1f1-e5ac-4a87-9b4b-aa2c16527255"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.395212 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc21b80b-c600-46ec-b79a-8988ef57da90" (UID: "fc21b80b-c600-46ec-b79a-8988ef57da90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431414 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431498 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/fc21b80b-c600-46ec-b79a-8988ef57da90-kube-api-access-w4qlw\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431509 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431518 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431526 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc21b80b-c600-46ec-b79a-8988ef57da90-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.431537 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfzbs\" (UniqueName: \"kubernetes.io/projected/de72e1f1-e5ac-4a87-9b4b-aa2c16527255-kube-api-access-zfzbs\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.532196 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdgc"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.532220 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdgc" event={"ID":"de72e1f1-e5ac-4a87-9b4b-aa2c16527255","Type":"ContainerDied","Data":"e6d088bd4edcd9bb3121b7d5fa58b67efca2f7921089c3017c7ea71d386091d1"}
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.532284 5101 scope.go:117] "RemoveContainer" containerID="04549feb994c2b74dc069466d54b7ab3259848b8ec705756ef416f0778f892f8"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.539601 5101 generic.go:358] "Generic (PLEG): container finished" podID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerID="71609ad87fefdf289ea2ee2b1f0125b35dd8ec1ce52d7137527b29ead9c42c04" exitCode=0
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.539697 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerDied","Data":"71609ad87fefdf289ea2ee2b1f0125b35dd8ec1ce52d7137527b29ead9c42c04"}
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.542238 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdkwz"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.542238 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdkwz" event={"ID":"fc21b80b-c600-46ec-b79a-8988ef57da90","Type":"ContainerDied","Data":"7277c821d4f7c031cc057356780994dcb36a8c8c01a886c04cf089b492c6725e"}
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.552612 5101 scope.go:117] "RemoveContainer" containerID="4cd5c5382b001db821961f720b87c45e966270815dd2a2cc5636b6a6600a5028"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.567137 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"]
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.569788 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bdgc"]
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.584575 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdkwz"]
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.584660 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hdkwz"]
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.591120 5101 scope.go:117] "RemoveContainer" containerID="b543889f92a8c81a9153f94ad1f04c8cc1a550e9246007f97c057ecd17a92cdf"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.607453 5101 scope.go:117] "RemoveContainer" containerID="2e9903f352101c461b64723325ba12c9bd801210aa8c0aa1bb1a2a3fc7d2e0b6"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.621802 5101 scope.go:117] "RemoveContainer" containerID="f7608676d31883cba846883a7a8baa0a1d52539413582cc0f0858953e1d8adb0"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.639507 5101 scope.go:117] "RemoveContainer" containerID="55dee6045d8b53613a35e4d1754e6f1e1d33e396d6c37938c0d2348adbb75e4c"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.848373 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.938024 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content\") pod \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.938135 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrhv6\" (UniqueName: \"kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6\") pod \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.938234 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities\") pod \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\" (UID: \"21c1a591-9051-4ab4-883b-c6a2cf1aecff\") "
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.939646 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities" (OuterVolumeSpecName: "utilities") pod "21c1a591-9051-4ab4-883b-c6a2cf1aecff" (UID: "21c1a591-9051-4ab4-883b-c6a2cf1aecff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.943303 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6" (OuterVolumeSpecName: "kube-api-access-nrhv6") pod "21c1a591-9051-4ab4-883b-c6a2cf1aecff" (UID: "21c1a591-9051-4ab4-883b-c6a2cf1aecff"). InnerVolumeSpecName "kube-api-access-nrhv6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:55:15 crc kubenswrapper[5101]: I0122 09:55:15.956358 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21c1a591-9051-4ab4-883b-c6a2cf1aecff" (UID: "21c1a591-9051-4ab4-883b-c6a2cf1aecff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.047268 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.047324 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrhv6\" (UniqueName: \"kubernetes.io/projected/21c1a591-9051-4ab4-883b-c6a2cf1aecff-kube-api-access-nrhv6\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.047341 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c1a591-9051-4ab4-883b-c6a2cf1aecff-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.536787 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" path="/var/lib/kubelet/pods/de72e1f1-e5ac-4a87-9b4b-aa2c16527255/volumes"
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.537913 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" path="/var/lib/kubelet/pods/fc21b80b-c600-46ec-b79a-8988ef57da90/volumes"
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.558791 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzkjb"
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.558817 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzkjb" event={"ID":"21c1a591-9051-4ab4-883b-c6a2cf1aecff","Type":"ContainerDied","Data":"c205c8322925fa51094200f4d041e2c5cc7e1ce05d6e8a591fee5f1f9c639f75"}
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.558870 5101 scope.go:117] "RemoveContainer" containerID="71609ad87fefdf289ea2ee2b1f0125b35dd8ec1ce52d7137527b29ead9c42c04"
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.578441 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"]
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.582015 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzkjb"]
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.582353 5101 scope.go:117] "RemoveContainer" containerID="e24d1e71d6b4c45531d7bcd0c199a7cb64360495e2390d41a434e6937915a19d"
Jan 22 09:55:16 crc kubenswrapper[5101]: I0122 09:55:16.617834 5101 scope.go:117] "RemoveContainer" containerID="0ca74f2740090f5fa5f8275a8863a98b886ca2e94f58307fa6eb5427f71441f1"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.086825 5101 patch_prober.go:28] interesting pod/downloads-747b44746d-w2759 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.086908 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-w2759" podUID="ada11655-156b-4b1e-ad19-8391c89c8e6b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.090561 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.215526 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dc6g7"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.227896 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.263413 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dc6g7"
Jan 22 09:55:17 crc kubenswrapper[5101]: I0122 09:55:17.280917 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:18 crc kubenswrapper[5101]: I0122 09:55:18.541367 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" path="/var/lib/kubelet/pods/21c1a591-9051-4ab4-883b-c6a2cf1aecff/volumes"
Jan 22 09:55:20 crc kubenswrapper[5101]: I0122 09:55:20.671554 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"]
Jan 22 09:55:20 crc kubenswrapper[5101]: I0122 09:55:20.672113 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7t7v9" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="registry-server" containerID="cri-o://fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f" gracePeriod=2
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.260650 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.364546 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities" (OuterVolumeSpecName: "utilities") pod "2d87939b-ab96-41d5-ad67-0b52de7b0613" (UID: "2d87939b-ab96-41d5-ad67-0b52de7b0613"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.362490 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities\") pod \"2d87939b-ab96-41d5-ad67-0b52de7b0613\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") "
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.364734 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content\") pod \"2d87939b-ab96-41d5-ad67-0b52de7b0613\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") "
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.364819 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqt9r\" (UniqueName: \"kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r\") pod \"2d87939b-ab96-41d5-ad67-0b52de7b0613\" (UID: \"2d87939b-ab96-41d5-ad67-0b52de7b0613\") "
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.365345 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.372824 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r" (OuterVolumeSpecName: "kube-api-access-hqt9r") pod "2d87939b-ab96-41d5-ad67-0b52de7b0613" (UID: "2d87939b-ab96-41d5-ad67-0b52de7b0613"). InnerVolumeSpecName "kube-api-access-hqt9r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.466696 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqt9r\" (UniqueName: \"kubernetes.io/projected/2d87939b-ab96-41d5-ad67-0b52de7b0613-kube-api-access-hqt9r\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.483737 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d87939b-ab96-41d5-ad67-0b52de7b0613" (UID: "2d87939b-ab96-41d5-ad67-0b52de7b0613"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.568050 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d87939b-ab96-41d5-ad67-0b52de7b0613-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.672572 5101 generic.go:358] "Generic (PLEG): container finished" podID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerID="fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f" exitCode=0
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.672758 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerDied","Data":"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"}
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.672795 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t7v9" event={"ID":"2d87939b-ab96-41d5-ad67-0b52de7b0613","Type":"ContainerDied","Data":"82043d34100b1e6b8f26cb0a84e73b283bcd01e5137f4520c2fe816d44314648"}
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.672818 5101 scope.go:117] "RemoveContainer" containerID="fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.672999 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7t7v9"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.694172 5101 scope.go:117] "RemoveContainer" containerID="03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.702659 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"]
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.705903 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7t7v9"]
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.724327 5101 scope.go:117] "RemoveContainer" containerID="4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.743205 5101 scope.go:117] "RemoveContainer" containerID="fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"
Jan 22 09:55:21 crc kubenswrapper[5101]: E0122 09:55:21.744973 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f\": container with ID starting with fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f not found: ID does not exist" containerID="fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.745013 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f"} err="failed to get container status \"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f\": rpc error: code = NotFound desc = could not find container \"fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f\": container with ID starting with fb58931778d4ad61d9cb43e967c0a14c95458476645bba84e07bcc23216a767f not found: ID does not exist"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.745056 5101 scope.go:117] "RemoveContainer" containerID="03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"
Jan 22 09:55:21 crc kubenswrapper[5101]: E0122 09:55:21.745396 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404\": container with ID starting with 03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404 not found: ID does not exist" containerID="03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.745446 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404"} err="failed to get container status \"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404\": rpc error: code = NotFound desc = could not find container \"03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404\": container with ID starting with 03afc534f78a6ab5bc62c348a5fa466a8b30450e4c09c96b2de3408991679404 not found: ID does not exist"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.745467 5101 scope.go:117] "RemoveContainer" containerID="4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9"
Jan 22 09:55:21 crc kubenswrapper[5101]: E0122 09:55:21.745899 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9\": container with ID starting with 4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9 not found: ID does not exist" containerID="4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9"
Jan 22 09:55:21 crc kubenswrapper[5101]: I0122 09:55:21.745955 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9"} err="failed to get container status \"4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9\": rpc error: code = NotFound desc = could not find container \"4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9\": container with ID starting with 4f3123342ea3574ac817957d74d666ea9b4f49e1e48234fd351f74d5f18e68b9 not found: ID does not exist"
Jan 22 09:55:22 crc kubenswrapper[5101]: I0122 09:55:22.537676 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" path="/var/lib/kubelet/pods/2d87939b-ab96-41d5-ad67-0b52de7b0613/volumes"
Jan 22 09:55:27 crc kubenswrapper[5101]: I0122 09:55:27.088581 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-w2759"
Jan 22 09:55:36 crc kubenswrapper[5101]: I0122 09:55:36.897673 5101 ???:1] "http: TLS handshake error from 192.168.126.11:52314: no serving certificate available for the kubelet"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.051760 5101 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052949 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="extract-content"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052963 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="extract-content"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052974 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="extract-utilities"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052979 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="extract-utilities"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052988 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="extract-content"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.052994 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="extract-content"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053012 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="registry-server"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053017 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="registry-server"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053030 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="registry-server"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053035 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="registry-server"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053046 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="extract-utilities"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053051 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="extract-utilities"
Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053060 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="extract-content"
Jan 22
09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053065 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="extract-content" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053072 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053078 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053089 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab7ec671-bb9f-4656-9f0a-38de344d05c8" containerName="pruner" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053094 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ec671-bb9f-4656-9f0a-38de344d05c8" containerName="pruner" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053106 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053111 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053122 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="extract-utilities" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053127 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="extract-utilities" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053133 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="extract-content" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053138 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="extract-content" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053146 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="extract-utilities" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053151 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="extract-utilities" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053233 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="21c1a591-9051-4ab4-883b-c6a2cf1aecff" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053247 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="de72e1f1-e5ac-4a87-9b4b-aa2c16527255" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053254 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d87939b-ab96-41d5-ad67-0b52de7b0613" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053263 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab7ec671-bb9f-4656-9f0a-38de344d05c8" containerName="pruner" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.053270 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc21b80b-c600-46ec-b79a-8988ef57da90" containerName="registry-server" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.066811 5101 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.066857 5101 kubelet.go:2537] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067069 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067494 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5" gracePeriod=15 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067517 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e" gracePeriod=15 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067577 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21" gracePeriod=15 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067567 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a" gracePeriod=15 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067522 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f" gracePeriod=15 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067559 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067719 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067742 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067752 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067763 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067770 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067797 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067815 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067834 5101 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067839 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067856 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067861 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067889 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067895 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067907 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067913 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067923 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.067930 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 
22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068153 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068171 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068180 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068188 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068197 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068207 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068217 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068225 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068373 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068386 5101 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.068538 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.073532 5101 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.088946 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102258 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102317 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102343 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102365 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102387 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102406 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102455 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102481 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102522 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.102553 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.115929 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204150 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204537 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204559 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204621 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204651 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204689 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204701 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204748 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204758 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204714 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204767 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204796 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204822 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204859 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204889 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204887 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204914 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.204788 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.205096 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.205131 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.407025 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.431080 5101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d050403e0a752 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC 
m=+215.873165837,LastTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC m=+215.873165837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.556621 5101 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.556962 5101 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.557159 5101 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.557656 5101 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.557898 5101 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.557938 5101 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.558168 5101 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Jan 22 09:55:43 crc kubenswrapper[5101]: E0122 09:55:43.759739 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.791274 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff"} Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.791319 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e4bb91a8e7f5939e0e54079bd7ea083890295660082bf1769210505bb521a6b0"} Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.792080 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.794214 5101 generic.go:358] "Generic (PLEG): container finished" podID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" containerID="bad55e13ee2180f037e10e50c92eb4654ed7ba79fe3eadddc52db3bf8691ce25" exitCode=0 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.794282 5101 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3","Type":"ContainerDied","Data":"bad55e13ee2180f037e10e50c92eb4654ed7ba79fe3eadddc52db3bf8691ce25"} Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.794963 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.795259 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.796861 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798060 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798618 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e" exitCode=0 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798644 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f" 
exitCode=0 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798655 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a" exitCode=0 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798665 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21" exitCode=2 Jan 22 09:55:43 crc kubenswrapper[5101]: I0122 09:55:43.798746 5101 scope.go:117] "RemoveContainer" containerID="f35da6a4d24f5cb6a20a1ef1602d1ab151176cadd40be613de67b9f950888dcf" Jan 22 09:55:44 crc kubenswrapper[5101]: E0122 09:55:44.161211 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Jan 22 09:55:44 crc kubenswrapper[5101]: I0122 09:55:44.806831 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 09:55:44 crc kubenswrapper[5101]: E0122 09:55:44.961788 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.023229 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.023965 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.024321 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.133768 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir\") pod \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.133872 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock\") pod \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.133901 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" (UID: "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.133988 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock" (OuterVolumeSpecName: "var-lock") pod "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" (UID: "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.134002 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access\") pod \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\" (UID: \"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.135104 5101 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.135128 5101 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.157434 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" (UID: "1fcb004c-8428-4e67-92f4-b6ab6cea8bf3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.236272 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fcb004c-8428-4e67-92f4-b6ab6cea8bf3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.492414 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.493641 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.494612 5101 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.495141 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.495498 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 
09:55:45.540244 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540296 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540329 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540370 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540397 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540526 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540544 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540570 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.540948 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.541007 5101 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.541021 5101 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.541033 5101 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.542533 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.642257 5101 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.642309 5101 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.814059 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.814057 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"1fcb004c-8428-4e67-92f4-b6ab6cea8bf3","Type":"ContainerDied","Data":"62518e5d905f71fc025d21ec7cf2d75c0c7839e167c57680418494ca651de8b6"} Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.814199 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62518e5d905f71fc025d21ec7cf2d75c0c7839e167c57680418494ca651de8b6" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.817212 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.817958 5101 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5" exitCode=0 Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.818047 5101 scope.go:117] "RemoveContainer" containerID="1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.818233 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.831098 5101 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.831795 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.832094 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.837780 5101 scope.go:117] "RemoveContainer" containerID="82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.838885 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.839170 5101 status_manager.go:895] "Failed to get status for pod" 
podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.839456 5101 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.852879 5101 scope.go:117] "RemoveContainer" containerID="28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.866678 5101 scope.go:117] "RemoveContainer" containerID="cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.879558 5101 scope.go:117] "RemoveContainer" containerID="0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.894436 5101 scope.go:117] "RemoveContainer" containerID="9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.955992 5101 scope.go:117] "RemoveContainer" containerID="1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.956372 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e\": container with ID starting with 1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e not found: ID does not exist" 
containerID="1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.956413 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e"} err="failed to get container status \"1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e\": rpc error: code = NotFound desc = could not find container \"1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e\": container with ID starting with 1e7f72084ec907351fbae268053f8b9ac43c75cf18b58bcc511edfc2afef474e not found: ID does not exist" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.956506 5101 scope.go:117] "RemoveContainer" containerID="82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.956988 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\": container with ID starting with 82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f not found: ID does not exist" containerID="82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.957023 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f"} err="failed to get container status \"82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\": rpc error: code = NotFound desc = could not find container \"82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f\": container with ID starting with 82cd6d68a7f0d9a06988d26362146324ed5913e568a078e6ee96a921c3c2902f not found: ID does not exist" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.957042 5101 scope.go:117] 
"RemoveContainer" containerID="28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.957384 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\": container with ID starting with 28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a not found: ID does not exist" containerID="28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.957444 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a"} err="failed to get container status \"28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\": rpc error: code = NotFound desc = could not find container \"28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a\": container with ID starting with 28ba4b27dea9cb2623dbcbbe78ad78cb5dcde78b6e7d3f7427bdc35ce0ec8c4a not found: ID does not exist" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.957470 5101 scope.go:117] "RemoveContainer" containerID="cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.960200 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\": container with ID starting with cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21 not found: ID does not exist" containerID="cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.960233 5101 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21"} err="failed to get container status \"cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\": rpc error: code = NotFound desc = could not find container \"cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21\": container with ID starting with cbd4a2ce43ccb33422f7ef0aac19ab763cf60e5eda721fef7b865dd7ce5e2b21 not found: ID does not exist" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.960251 5101 scope.go:117] "RemoveContainer" containerID="0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.960521 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\": container with ID starting with 0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5 not found: ID does not exist" containerID="0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.960547 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5"} err="failed to get container status \"0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\": rpc error: code = NotFound desc = could not find container \"0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5\": container with ID starting with 0afe0252f081fe052829ac472caabe73a3719a978f35ae3c59ef11a71599b0c5 not found: ID does not exist" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.960565 5101 scope.go:117] "RemoveContainer" containerID="9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326" Jan 22 09:55:45 crc kubenswrapper[5101]: E0122 09:55:45.960940 5101 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\": container with ID starting with 9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326 not found: ID does not exist" containerID="9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326" Jan 22 09:55:45 crc kubenswrapper[5101]: I0122 09:55:45.960977 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326"} err="failed to get container status \"9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\": rpc error: code = NotFound desc = could not find container \"9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326\": container with ID starting with 9bcab9d709c20bebf249fc8191c8812d11c62cbcae153532fe96978750092326 not found: ID does not exist" Jan 22 09:55:46 crc kubenswrapper[5101]: E0122 09:55:46.328651 5101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d050403e0a752 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC m=+215.873165837,LastTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC 
m=+215.873165837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:55:46 crc kubenswrapper[5101]: I0122 09:55:46.539451 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 22 09:55:46 crc kubenswrapper[5101]: E0122 09:55:46.563523 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 22 09:55:48 crc kubenswrapper[5101]: I0122 09:55:48.532699 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:48 crc kubenswrapper[5101]: I0122 09:55:48.534456 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 22 09:55:49 crc kubenswrapper[5101]: E0122 09:55:49.765681 5101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Jan 22 09:55:56 crc kubenswrapper[5101]: E0122 09:55:56.167077 5101 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="7s"
Jan 22 09:55:56 crc kubenswrapper[5101]: E0122 09:55:56.330808 5101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d050403e0a752 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC m=+215.873165837,LastTimestamp:2026-01-22 09:55:43.42953557 +0000 UTC m=+215.873165837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.533360 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.533855 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.535670 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.536277 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.540823 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.553594 5101 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.553624 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:55:58 crc kubenswrapper[5101]: E0122 09:55:58.554116 5101 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.554572 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:55:58 crc kubenswrapper[5101]: W0122 09:55:58.574633 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-ce96cb9c4d31f9b41017d3cc7c4c49bf934e6a89bfd070a12218d9dc0b8b13f3 WatchSource:0}: Error finding container ce96cb9c4d31f9b41017d3cc7c4c49bf934e6a89bfd070a12218d9dc0b8b13f3: Status 404 returned error can't find the container with id ce96cb9c4d31f9b41017d3cc7c4c49bf934e6a89bfd070a12218d9dc0b8b13f3
Jan 22 09:55:58 crc kubenswrapper[5101]: I0122 09:55:58.898124 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ce96cb9c4d31f9b41017d3cc7c4c49bf934e6a89bfd070a12218d9dc0b8b13f3"}
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.910211 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.910514 5101 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9" exitCode=1
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.910573 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9"}
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.911544 5101 scope.go:117] "RemoveContainer" containerID="1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.911929 5101 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.912565 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.913073 5101 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="eaba4d4c30589c12e51f25075c5006b4b477aaa6c97b2bd1559adc884e1519cb" exitCode=0
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.913130 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.913240 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"eaba4d4c30589c12e51f25075c5006b4b477aaa6c97b2bd1559adc884e1519cb"}
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.913412 5101 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.913462 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:55:59 crc kubenswrapper[5101]: E0122 09:55:59.913961 5101 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.914127 5101 status_manager.go:895] "Failed to get status for pod" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.914733 5101 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:55:59 crc kubenswrapper[5101]: I0122 09:55:59.915222 5101 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused"
Jan 22 09:56:00 crc kubenswrapper[5101]: I0122 09:56:00.924952 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 09:56:00 crc kubenswrapper[5101]: I0122 09:56:00.925709 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6ca1bcc5c4f8742815c0243f5588715404d7d2794fedb1d6b44ca6fff00ae60c"}
Jan 22 09:56:00 crc kubenswrapper[5101]: I0122 09:56:00.931048 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b73befc264e3e5b96b31e296901461b595d8344244810e245f1c944bea47fa90"}
Jan 22 09:56:00 crc kubenswrapper[5101]: I0122 09:56:00.931101 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fb898c44a602464c490f0b87cc5e861b3d76afd3a98679aaad41359061acd4bb"}
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.940762 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"9ab43d9191042f07f56ae7cf2a569a42522eb3f55f4b681468a51f775c423651"}
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.940822 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"852cab48b83a4503020f7f7b1c880789f3a2705c9b9a5bbef8ed9bebacbc5f7e"}
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.940834 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f0175ad1bce1dfdffa36af68566cb37025220a8059eba8f1bdd8b74769073c44"}
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.942232 5101 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.942334 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:01 crc kubenswrapper[5101]: I0122 09:56:01.942278 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:02 crc kubenswrapper[5101]: I0122 09:56:02.373388 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:56:02 crc kubenswrapper[5101]: I0122 09:56:02.839984 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:56:02 crc kubenswrapper[5101]: I0122 09:56:02.840524 5101 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 09:56:02 crc kubenswrapper[5101]: I0122 09:56:02.840724 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 09:56:03 crc kubenswrapper[5101]: I0122 09:56:03.555124 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:03 crc kubenswrapper[5101]: I0122 09:56:03.555574 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:03 crc kubenswrapper[5101]: I0122 09:56:03.563110 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:06 crc kubenswrapper[5101]: I0122 09:56:06.953645 5101 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:06 crc kubenswrapper[5101]: I0122 09:56:06.954881 5101 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:06 crc kubenswrapper[5101]: I0122 09:56:06.971918 5101 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:06 crc kubenswrapper[5101]: I0122 09:56:06.972207 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:06 crc kubenswrapper[5101]: I0122 09:56:06.976641 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:56:07 crc kubenswrapper[5101]: I0122 09:56:07.978078 5101 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:07 crc kubenswrapper[5101]: I0122 09:56:07.978467 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ebfa479c-e165-476d-bd0f-766a025a73ef"
Jan 22 09:56:08 crc kubenswrapper[5101]: I0122 09:56:08.562811 5101 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="bdf89d3f-da23-496e-926c-e70900c8de68"
Jan 22 09:56:12 crc kubenswrapper[5101]: I0122 09:56:12.841028 5101 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 09:56:12 crc kubenswrapper[5101]: I0122 09:56:12.841125 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 09:56:13 crc kubenswrapper[5101]: I0122 09:56:13.791824 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:56:13 crc kubenswrapper[5101]: I0122 09:56:13.792145 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:56:16 crc kubenswrapper[5101]: I0122 09:56:16.694655 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.034724 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.084137 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.112671 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.361130 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.758684 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.788281 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.796035 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:17 crc kubenswrapper[5101]: I0122 09:56:17.999308 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.024119 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.069898 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.159666 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.230193 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.279520 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.464863 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 22 09:56:18 crc kubenswrapper[5101]: I0122 09:56:18.635173 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.276213 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.452206 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.544406 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.608385 5101 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.616071 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.679246 5101 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.798037 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.835411 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.893931 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.965209 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 22 09:56:19 crc kubenswrapper[5101]: I0122 09:56:19.988222 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.023311 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.080570 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.112543 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.143992 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.251117 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.275611 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.303209 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.353030 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.362077 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.423413 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.487493 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.489067 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.578589 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.603550 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.605077 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.689754 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.758244 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.793063 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 22 09:56:20 crc kubenswrapper[5101]: I0122 09:56:20.829803 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.069865 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.145470 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.157769 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.208896 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.234976 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.253785 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.298750 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.301760 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.343766 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.411907 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.433295 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.442127 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.487456 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.498841 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.501350 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.549510 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.732448 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.737710 5101 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.826890 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 22 09:56:21 crc kubenswrapper[5101]: I0122 09:56:21.929054 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.169818 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.286817 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.322167 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.378629 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.380124 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.391773 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.472036 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.506236 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.618320 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.625457 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.668482 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.679371 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.841507 5101 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.841635 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.841721 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.842768 5101 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6ca1bcc5c4f8742815c0243f5588715404d7d2794fedb1d6b44ca6fff00ae60c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.842907 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://6ca1bcc5c4f8742815c0243f5588715404d7d2794fedb1d6b44ca6fff00ae60c" gracePeriod=30
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.881585 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:22 crc kubenswrapper[5101]: I0122 09:56:22.995611 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.205481 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.305177 5101 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.442494 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.558944 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.613370 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.685937 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.742683 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.805534 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.843007 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.874554 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 22 09:56:23 crc kubenswrapper[5101]: I0122 09:56:23.944766 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.053075 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.113696 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.215895 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.224073 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.292231 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.325513 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.325605 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.329693 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.459844 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.573375 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.670333 5101 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.727045 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.812703 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.834218 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.848223 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.876635 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.882488 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 22
09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.937318 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.941610 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.965701 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 09:56:24 crc kubenswrapper[5101]: I0122 09:56:24.974035 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.005712 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.062502 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.096061 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.119860 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.144009 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.201671 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.331015 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.357113 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.421048 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.574753 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.677502 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.693973 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.706852 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.722235 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.746087 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.832672 5101 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.894493 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.907019 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.912905 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:25 crc kubenswrapper[5101]: I0122 09:56:25.991370 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.017358 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.073856 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.157337 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.193855 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.292564 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.299489 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.319713 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.354125 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.404245 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.603902 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.674177 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.739965 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.824070 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.872213 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.888039 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.947058 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 09:56:26 crc kubenswrapper[5101]: I0122 09:56:26.975545 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.016552 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.056076 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.246406 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.275540 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.319113 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.332401 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.346792 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 
09:56:27.360597 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.393956 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.415006 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.534737 5101 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.537188 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.537173516 podStartE2EDuration="44.537173516s" podCreationTimestamp="2026-01-22 09:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:56:06.680668725 +0000 UTC m=+239.124299002" watchObservedRunningTime="2026-01-22 09:56:27.537173516 +0000 UTC m=+259.980803783" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.539070 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.539122 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.544038 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.545020 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.560047 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.56003094 podStartE2EDuration="21.56003094s" podCreationTimestamp="2026-01-22 09:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:56:27.558978258 +0000 UTC m=+260.002608525" watchObservedRunningTime="2026-01-22 09:56:27.56003094 +0000 UTC m=+260.003661207" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.576496 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.585928 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.592438 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.607526 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.608397 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.611299 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.627722 5101 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.780776 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.828562 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 09:56:27 crc kubenswrapper[5101]: I0122 09:56:27.895034 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.003184 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.042820 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.058084 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.146579 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.148849 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.163575 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.270876 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.472731 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.651761 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.706011 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.707377 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.812451 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.820439 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.838020 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.839314 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.932676 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:28 crc kubenswrapper[5101]: I0122 09:56:28.955384 5101 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.001073 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.051860 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.054187 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.140895 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.233527 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.319880 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.339371 5101 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.339961 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff" gracePeriod=5 Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.451198 5101 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.611145 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.711545 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.756024 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 09:56:29 crc kubenswrapper[5101]: I0122 09:56:29.989398 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.028968 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.041451 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.098470 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.197114 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.267783 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 09:56:30 crc 
kubenswrapper[5101]: I0122 09:56:30.319481 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.340779 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.395521 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.406135 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.422650 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.439176 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.532500 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.606978 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.633838 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 09:56:30 crc kubenswrapper[5101]: I0122 09:56:30.715874 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 09:56:30 
crc kubenswrapper[5101]: I0122 09:56:30.740614 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 09:56:31 crc kubenswrapper[5101]: I0122 09:56:31.006044 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 09:56:31 crc kubenswrapper[5101]: I0122 09:56:31.443990 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 09:56:31 crc kubenswrapper[5101]: I0122 09:56:31.455169 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 09:56:32 crc kubenswrapper[5101]: I0122 09:56:32.102714 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 09:56:32 crc kubenswrapper[5101]: I0122 09:56:32.104446 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 09:56:32 crc kubenswrapper[5101]: I0122 09:56:32.122030 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 09:56:32 crc kubenswrapper[5101]: I0122 09:56:32.196944 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 09:56:32 crc kubenswrapper[5101]: I0122 09:56:32.724475 5101 ???:1] "http: TLS handshake error from 192.168.126.11:45154: no serving certificate available for the kubelet" Jan 22 09:56:33 crc kubenswrapper[5101]: I0122 09:56:33.041849 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 
09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.911010 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.911576 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.954494 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.954990 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955177 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955286 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955099 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" 
(OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955246 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955342 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955765 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.955826 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.956206 5101 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.956301 5101 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.956366 5101 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.956451 5101 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:34 crc kubenswrapper[5101]: I0122 09:56:34.965760 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.058138 5101 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.146954 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.147021 5101 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff" exitCode=137 Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.147152 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.147210 5101 scope.go:117] "RemoveContainer" containerID="e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.164604 5101 scope.go:117] "RemoveContainer" containerID="e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff" Jan 22 09:56:35 crc kubenswrapper[5101]: E0122 09:56:35.165083 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff\": container with ID starting with e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff not found: ID does not exist" containerID="e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff" Jan 22 09:56:35 crc kubenswrapper[5101]: I0122 09:56:35.165115 5101 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff"} err="failed to get container status \"e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff\": rpc error: code = NotFound desc = could not find container \"e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff\": container with ID starting with e5e05f6f9a0c02136fde2f7724979cb69aa8b2956b143c3df138712b1b742dff not found: ID does not exist" Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.537444 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.537882 5101 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.547582 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.547627 5101 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="afe24d88-76ca-4e68-8489-54e221b512fd" Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.550912 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:56:36 crc kubenswrapper[5101]: I0122 09:56:36.550964 5101 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="afe24d88-76ca-4e68-8489-54e221b512fd" Jan 22 09:56:40 crc kubenswrapper[5101]: I0122 09:56:40.215725 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 09:56:43 crc 
kubenswrapper[5101]: I0122 09:56:43.075741 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.078117 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z79d9" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="registry-server" containerID="cri-o://35fa76b3d3f6e3daf6928e8e074aae3069c434cd14a0fc571d9b16f06fe71da9" gracePeriod=30 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.084492 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.085096 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p79nv" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="registry-server" containerID="cri-o://1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10" gracePeriod=30 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.105054 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.105381 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" containerID="cri-o://71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8" gracePeriod=30 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.118439 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.118846 5101 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-k5s8n" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="registry-server" containerID="cri-o://10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37" gracePeriod=30 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.127537 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.128004 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dc6g7" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="registry-server" containerID="cri-o://bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2" gracePeriod=30 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.207364 5101 generic.go:358] "Generic (PLEG): container finished" podID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerID="35fa76b3d3f6e3daf6928e8e074aae3069c434cd14a0fc571d9b16f06fe71da9" exitCode=0 Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.207689 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerDied","Data":"35fa76b3d3f6e3daf6928e8e074aae3069c434cd14a0fc571d9b16f06fe71da9"} Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.424438 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.476811 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.507872 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.533821 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.550692 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.577802 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp\") pod \"43dfdef8-e150-4eba-b790-6c9a395fba76\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.577858 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities\") pod \"e788d99a-4b7e-4d84-bf22-394fb29a2382\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.577912 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities\") pod \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.577943 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6pl\" (UniqueName: \"kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl\") pod \"43dfdef8-e150-4eba-b790-6c9a395fba76\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.577984 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-s9588\" (UniqueName: \"kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588\") pod \"e788d99a-4b7e-4d84-bf22-394fb29a2382\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.578025 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content\") pod \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.578054 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca\") pod \"43dfdef8-e150-4eba-b790-6c9a395fba76\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.578084 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q6m5\" (UniqueName: \"kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5\") pod \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\" (UID: \"7e8d5b04-69ec-44a1-adfe-7dfc917e4530\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.578158 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics\") pod \"43dfdef8-e150-4eba-b790-6c9a395fba76\" (UID: \"43dfdef8-e150-4eba-b790-6c9a395fba76\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.578198 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content\") pod 
\"e788d99a-4b7e-4d84-bf22-394fb29a2382\" (UID: \"e788d99a-4b7e-4d84-bf22-394fb29a2382\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.580162 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp" (OuterVolumeSpecName: "tmp") pod "43dfdef8-e150-4eba-b790-6c9a395fba76" (UID: "43dfdef8-e150-4eba-b790-6c9a395fba76"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.581450 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities" (OuterVolumeSpecName: "utilities") pod "7e8d5b04-69ec-44a1-adfe-7dfc917e4530" (UID: "7e8d5b04-69ec-44a1-adfe-7dfc917e4530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.582088 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities" (OuterVolumeSpecName: "utilities") pod "e788d99a-4b7e-4d84-bf22-394fb29a2382" (UID: "e788d99a-4b7e-4d84-bf22-394fb29a2382"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.585062 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588" (OuterVolumeSpecName: "kube-api-access-s9588") pod "e788d99a-4b7e-4d84-bf22-394fb29a2382" (UID: "e788d99a-4b7e-4d84-bf22-394fb29a2382"). InnerVolumeSpecName "kube-api-access-s9588". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.585330 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "43dfdef8-e150-4eba-b790-6c9a395fba76" (UID: "43dfdef8-e150-4eba-b790-6c9a395fba76"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.586079 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl" (OuterVolumeSpecName: "kube-api-access-7g6pl") pod "43dfdef8-e150-4eba-b790-6c9a395fba76" (UID: "43dfdef8-e150-4eba-b790-6c9a395fba76"). InnerVolumeSpecName "kube-api-access-7g6pl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.586570 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5" (OuterVolumeSpecName: "kube-api-access-7q6m5") pod "7e8d5b04-69ec-44a1-adfe-7dfc917e4530" (UID: "7e8d5b04-69ec-44a1-adfe-7dfc917e4530"). InnerVolumeSpecName "kube-api-access-7q6m5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.590963 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "43dfdef8-e150-4eba-b790-6c9a395fba76" (UID: "43dfdef8-e150-4eba-b790-6c9a395fba76"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.610202 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e788d99a-4b7e-4d84-bf22-394fb29a2382" (UID: "e788d99a-4b7e-4d84-bf22-394fb29a2382"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.634409 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e8d5b04-69ec-44a1-adfe-7dfc917e4530" (UID: "7e8d5b04-69ec-44a1-adfe-7dfc917e4530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679578 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgtnf\" (UniqueName: \"kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf\") pod \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679684 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content\") pod \"6d1ac98b-01eb-4125-837f-28a4429c09c6\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679742 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities\") pod \"6d1ac98b-01eb-4125-837f-28a4429c09c6\" (UID: 
\"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679779 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities\") pod \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679817 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content\") pod \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\" (UID: \"0fa3648a-30f1-4fba-8830-a4c93ff9a88b\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.679834 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8mfj\" (UniqueName: \"kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj\") pod \"6d1ac98b-01eb-4125-837f-28a4429c09c6\" (UID: \"6d1ac98b-01eb-4125-837f-28a4429c09c6\") " Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680017 5101 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680058 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680067 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43dfdef8-e150-4eba-b790-6c9a395fba76-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680078 5101 
reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788d99a-4b7e-4d84-bf22-394fb29a2382-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680087 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680097 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7g6pl\" (UniqueName: \"kubernetes.io/projected/43dfdef8-e150-4eba-b790-6c9a395fba76-kube-api-access-7g6pl\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680105 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s9588\" (UniqueName: \"kubernetes.io/projected/e788d99a-4b7e-4d84-bf22-394fb29a2382-kube-api-access-s9588\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680114 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680121 5101 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43dfdef8-e150-4eba-b790-6c9a395fba76-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.680130 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7q6m5\" (UniqueName: \"kubernetes.io/projected/7e8d5b04-69ec-44a1-adfe-7dfc917e4530-kube-api-access-7q6m5\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.681442 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities" (OuterVolumeSpecName: "utilities") pod "0fa3648a-30f1-4fba-8830-a4c93ff9a88b" (UID: "0fa3648a-30f1-4fba-8830-a4c93ff9a88b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.681517 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities" (OuterVolumeSpecName: "utilities") pod "6d1ac98b-01eb-4125-837f-28a4429c09c6" (UID: "6d1ac98b-01eb-4125-837f-28a4429c09c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.683521 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj" (OuterVolumeSpecName: "kube-api-access-c8mfj") pod "6d1ac98b-01eb-4125-837f-28a4429c09c6" (UID: "6d1ac98b-01eb-4125-837f-28a4429c09c6"). InnerVolumeSpecName "kube-api-access-c8mfj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.685476 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf" (OuterVolumeSpecName: "kube-api-access-sgtnf") pod "0fa3648a-30f1-4fba-8830-a4c93ff9a88b" (UID: "0fa3648a-30f1-4fba-8830-a4c93ff9a88b"). InnerVolumeSpecName "kube-api-access-sgtnf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.695186 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fa3648a-30f1-4fba-8830-a4c93ff9a88b" (UID: "0fa3648a-30f1-4fba-8830-a4c93ff9a88b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.777827 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d1ac98b-01eb-4125-837f-28a4429c09c6" (UID: "6d1ac98b-01eb-4125-837f-28a4429c09c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781553 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781595 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d1ac98b-01eb-4125-837f-28a4429c09c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781609 5101 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781617 5101 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781627 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8mfj\" (UniqueName: \"kubernetes.io/projected/6d1ac98b-01eb-4125-837f-28a4429c09c6-kube-api-access-c8mfj\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.781641 5101 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-sgtnf\" (UniqueName: \"kubernetes.io/projected/0fa3648a-30f1-4fba-8830-a4c93ff9a88b-kube-api-access-sgtnf\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.791852 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:56:43 crc kubenswrapper[5101]: I0122 09:56:43.791944 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.216294 5101 generic.go:358] "Generic (PLEG): container finished" podID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerID="10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37" exitCode=0 Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.216384 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5s8n" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.216384 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerDied","Data":"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.218098 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5s8n" event={"ID":"0fa3648a-30f1-4fba-8830-a4c93ff9a88b","Type":"ContainerDied","Data":"f9c55c10ad740e5b34ff14f482418309dd44919a1b30849e6341c2b71c4c3a84"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.218171 5101 scope.go:117] "RemoveContainer" containerID="10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.219720 5101 generic.go:358] "Generic (PLEG): container finished" podID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerID="bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2" exitCode=0 Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.219855 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc6g7" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.219865 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerDied","Data":"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.220408 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc6g7" event={"ID":"6d1ac98b-01eb-4125-837f-28a4429c09c6","Type":"ContainerDied","Data":"4fa00901525c3e04a548966ca7682d06a98452eb9275ec0414b0a638e5173ed8"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.222138 5101 generic.go:358] "Generic (PLEG): container finished" podID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerID="71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8" exitCode=0 Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.222265 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" event={"ID":"43dfdef8-e150-4eba-b790-6c9a395fba76","Type":"ContainerDied","Data":"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.222289 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" event={"ID":"43dfdef8-e150-4eba-b790-6c9a395fba76","Type":"ContainerDied","Data":"908efd83b1d222e32b0b2de371f9ce287cd0d4bc529bf541b286886b5db91c19"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.222376 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ss5t9" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.232467 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z79d9" event={"ID":"e788d99a-4b7e-4d84-bf22-394fb29a2382","Type":"ContainerDied","Data":"5fb0304b8c02221eae5486dc8de3d0a4f14b6636b7c78d834878d25f17c04cff"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.232644 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z79d9" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.235775 5101 generic.go:358] "Generic (PLEG): container finished" podID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerID="1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10" exitCode=0 Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.235820 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerDied","Data":"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.235843 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p79nv" event={"ID":"7e8d5b04-69ec-44a1-adfe-7dfc917e4530","Type":"ContainerDied","Data":"5200b5ec6f299642fe8d20435619bf53eebef9d2b1133952c8eb666800477ad7"} Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.235973 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p79nv" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.264036 5101 scope.go:117] "RemoveContainer" containerID="42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.286443 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.289200 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ss5t9"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.303680 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.307107 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p79nv"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.311546 5101 scope.go:117] "RemoveContainer" containerID="1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.318530 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.323567 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5s8n"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.331852 5101 scope.go:117] "RemoveContainer" containerID="10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.332402 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37\": container with ID starting with 
10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37 not found: ID does not exist" containerID="10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.332473 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37"} err="failed to get container status \"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37\": rpc error: code = NotFound desc = could not find container \"10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37\": container with ID starting with 10052eb3778eb79243dd33158cb2e22eafa0df02a54163b3c40dd0aa3080ca37 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.332509 5101 scope.go:117] "RemoveContainer" containerID="42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.332924 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91\": container with ID starting with 42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91 not found: ID does not exist" containerID="42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.332955 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91"} err="failed to get container status \"42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91\": rpc error: code = NotFound desc = could not find container \"42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91\": container with ID starting with 42426a318541962c802dc1ddf4d7606f54fc5ce7d9d2fffc71f9d3ab5717bd91 not found: ID does not 
exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.332980 5101 scope.go:117] "RemoveContainer" containerID="1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.333479 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584\": container with ID starting with 1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584 not found: ID does not exist" containerID="1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.333536 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584"} err="failed to get container status \"1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584\": rpc error: code = NotFound desc = could not find container \"1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584\": container with ID starting with 1919cc6ac56f52a061739d95ca7fd02a7d12bcb4fc765277c0593c44bba53584 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.333580 5101 scope.go:117] "RemoveContainer" containerID="bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.334319 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.343309 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dc6g7"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.348014 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.351101 
5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z79d9"] Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.357304 5101 scope.go:117] "RemoveContainer" containerID="68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.373591 5101 scope.go:117] "RemoveContainer" containerID="3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.392878 5101 scope.go:117] "RemoveContainer" containerID="bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.393682 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2\": container with ID starting with bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2 not found: ID does not exist" containerID="bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.393864 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2"} err="failed to get container status \"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2\": rpc error: code = NotFound desc = could not find container \"bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2\": container with ID starting with bb232c21b71f6c8f4f02f1341898059ea9c733cfd153191e67fff302ec8da3b2 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.393902 5101 scope.go:117] "RemoveContainer" containerID="68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.394416 5101 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598\": container with ID starting with 68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598 not found: ID does not exist" containerID="68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.394489 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598"} err="failed to get container status \"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598\": rpc error: code = NotFound desc = could not find container \"68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598\": container with ID starting with 68895e911f831033fb5f7f4349e7afdbcddb4f327922ea4f860092f39a3fa598 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.394514 5101 scope.go:117] "RemoveContainer" containerID="3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.394815 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b\": container with ID starting with 3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b not found: ID does not exist" containerID="3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.394839 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b"} err="failed to get container status \"3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b\": rpc error: code = NotFound desc = could not find container 
\"3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b\": container with ID starting with 3f2e04d23d98eec4c3dceb064094d1ec26f4492bfb5dbdb44cb959b7688a981b not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.394854 5101 scope.go:117] "RemoveContainer" containerID="71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.409458 5101 scope.go:117] "RemoveContainer" containerID="71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.410104 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8\": container with ID starting with 71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8 not found: ID does not exist" containerID="71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.410178 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8"} err="failed to get container status \"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8\": rpc error: code = NotFound desc = could not find container \"71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8\": container with ID starting with 71c214168bfd03f7395f277ecacf94e8677964c39d03c04c863da7b77f4de2b8 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.410274 5101 scope.go:117] "RemoveContainer" containerID="35fa76b3d3f6e3daf6928e8e074aae3069c434cd14a0fc571d9b16f06fe71da9" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.424861 5101 scope.go:117] "RemoveContainer" containerID="b78819dfd5b403e1060fdb15ec07a0325bf368a96e076deef6e1bf6dedabe85f" Jan 22 09:56:44 crc 
kubenswrapper[5101]: I0122 09:56:44.440634 5101 scope.go:117] "RemoveContainer" containerID="e3d3d41d1ae640acc6ebd63537d727b397638f3690b04a59938eebc62d8443c4" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.454863 5101 scope.go:117] "RemoveContainer" containerID="1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.470812 5101 scope.go:117] "RemoveContainer" containerID="f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.485525 5101 scope.go:117] "RemoveContainer" containerID="84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.502272 5101 scope.go:117] "RemoveContainer" containerID="1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.502930 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10\": container with ID starting with 1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10 not found: ID does not exist" containerID="1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.502978 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10"} err="failed to get container status \"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10\": rpc error: code = NotFound desc = could not find container \"1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10\": container with ID starting with 1a80237d29ecdc5276b706e086bb271065f17d63b60d0bc7988eed74a0e9cc10 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.503014 
5101 scope.go:117] "RemoveContainer" containerID="f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.503893 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952\": container with ID starting with f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952 not found: ID does not exist" containerID="f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.503945 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952"} err="failed to get container status \"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952\": rpc error: code = NotFound desc = could not find container \"f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952\": container with ID starting with f098741946d79f38a237ac42974cea31a43cd011f771513f637eb7b110779952 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.503976 5101 scope.go:117] "RemoveContainer" containerID="84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482" Jan 22 09:56:44 crc kubenswrapper[5101]: E0122 09:56:44.505791 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482\": container with ID starting with 84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482 not found: ID does not exist" containerID="84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.505838 5101 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482"} err="failed to get container status \"84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482\": rpc error: code = NotFound desc = could not find container \"84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482\": container with ID starting with 84ac8207e181009e7cd61d8c2057eb96d4548042c965feb0381c173900af7482 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.536126 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" path="/var/lib/kubelet/pods/0fa3648a-30f1-4fba-8830-a4c93ff9a88b/volumes" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.537209 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" path="/var/lib/kubelet/pods/43dfdef8-e150-4eba-b790-6c9a395fba76/volumes" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.537665 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" path="/var/lib/kubelet/pods/6d1ac98b-01eb-4125-837f-28a4429c09c6/volumes" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.538745 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" path="/var/lib/kubelet/pods/7e8d5b04-69ec-44a1-adfe-7dfc917e4530/volumes" Jan 22 09:56:44 crc kubenswrapper[5101]: I0122 09:56:44.539468 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" path="/var/lib/kubelet/pods/e788d99a-4b7e-4d84-bf22-394fb29a2382/volumes" Jan 22 09:56:45 crc kubenswrapper[5101]: I0122 09:56:45.577304 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:48 crc kubenswrapper[5101]: I0122 09:56:48.363343 5101 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 09:56:50 crc kubenswrapper[5101]: I0122 09:56:50.387103 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 09:56:50 crc kubenswrapper[5101]: I0122 09:56:50.659558 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 22 09:56:51 crc kubenswrapper[5101]: I0122 09:56:51.157180 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 09:56:51 crc kubenswrapper[5101]: I0122 09:56:51.602836 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 22 09:56:51 crc kubenswrapper[5101]: I0122 09:56:51.925125 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.295698 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.298006 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.298067 5101 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6ca1bcc5c4f8742815c0243f5588715404d7d2794fedb1d6b44ca6fff00ae60c" exitCode=137 Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.298211 5101 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6ca1bcc5c4f8742815c0243f5588715404d7d2794fedb1d6b44ca6fff00ae60c"} Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.298299 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"102ddbb2f0342f78a5251ce194bba84ed6bd78a7a315a4482096dfe554161174"} Jan 22 09:56:53 crc kubenswrapper[5101]: I0122 09:56:53.298330 5101 scope.go:117] "RemoveContainer" containerID="1e0fdef7e068877da6a86fa0b15c2d38514c28f6645ddbfab0a7598309b595a9" Jan 22 09:56:54 crc kubenswrapper[5101]: I0122 09:56:54.304988 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 22 09:56:54 crc kubenswrapper[5101]: I0122 09:56:54.390655 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 09:56:54 crc kubenswrapper[5101]: I0122 09:56:54.406975 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 09:56:54 crc kubenswrapper[5101]: I0122 09:56:54.545571 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 09:56:55 crc kubenswrapper[5101]: I0122 09:56:55.012940 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 09:56:56 crc kubenswrapper[5101]: I0122 09:56:56.135212 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 09:56:56 crc kubenswrapper[5101]: I0122 09:56:56.162865 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 09:56:56 crc kubenswrapper[5101]: I0122 09:56:56.841887 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 09:56:56 crc kubenswrapper[5101]: I0122 09:56:56.875749 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 09:56:58 crc kubenswrapper[5101]: I0122 09:56:58.843989 5101 ???:1] "http: TLS handshake error from 192.168.126.11:37262: no serving certificate available for the kubelet" Jan 22 09:56:59 crc kubenswrapper[5101]: I0122 09:56:59.229406 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 09:56:59 crc kubenswrapper[5101]: I0122 09:56:59.644823 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 09:56:59 crc kubenswrapper[5101]: I0122 09:56:59.683392 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 09:56:59 crc kubenswrapper[5101]: I0122 09:56:59.955022 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 09:57:00 crc kubenswrapper[5101]: I0122 09:57:00.215160 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 09:57:02 crc kubenswrapper[5101]: I0122 09:57:02.313592 5101 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 09:57:02 crc kubenswrapper[5101]: I0122 09:57:02.373622 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:57:02 crc kubenswrapper[5101]: I0122 09:57:02.840306 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:57:02 crc kubenswrapper[5101]: I0122 09:57:02.844505 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:57:02 crc kubenswrapper[5101]: I0122 09:57:02.983969 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 09:57:03 crc kubenswrapper[5101]: I0122 09:57:03.359958 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:57:04 crc kubenswrapper[5101]: I0122 09:57:04.846567 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 22 09:57:05 crc kubenswrapper[5101]: I0122 09:57:05.202582 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 09:57:07 crc kubenswrapper[5101]: I0122 09:57:07.927465 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 22 09:57:08 crc kubenswrapper[5101]: I0122 09:57:08.654368 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 22 09:57:08 crc 
kubenswrapper[5101]: I0122 09:57:08.655951 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 22 09:57:08 crc kubenswrapper[5101]: I0122 09:57:08.972552 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 09:57:11 crc kubenswrapper[5101]: I0122 09:57:11.370143 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 09:57:11 crc kubenswrapper[5101]: I0122 09:57:11.929162 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 09:57:12 crc kubenswrapper[5101]: I0122 09:57:12.581396 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244213 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nl7z2"] Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244917 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244947 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244959 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244967 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" 
containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244984 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.244991 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245002 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245010 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245018 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245026 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245039 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245048 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245055 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245063 5101 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245077 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245084 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245093 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245100 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245113 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245120 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245128 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245134 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="extract-content" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245144 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" Jan 22 09:57:13 crc 
kubenswrapper[5101]: I0122 09:57:13.245153 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245164 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" containerName="installer" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245171 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" containerName="installer" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245185 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245193 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245202 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245209 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="extract-utilities" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245324 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="43dfdef8-e150-4eba-b790-6c9a395fba76" containerName="marketplace-operator" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245338 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d1ac98b-01eb-4125-837f-28a4429c09c6" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245351 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
containerName="startup-monitor" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245361 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fa3648a-30f1-4fba-8830-a4c93ff9a88b" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245370 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="e788d99a-4b7e-4d84-bf22-394fb29a2382" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245380 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e8d5b04-69ec-44a1-adfe-7dfc917e4530" containerName="registry-server" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.245389 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="1fcb004c-8428-4e67-92f4-b6ab6cea8bf3" containerName="installer" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.248522 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.251016 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.251070 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nl7z2"] Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.251630 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.251629 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.251859 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.258242 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.379941 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5c4q\" (UniqueName: \"kubernetes.io/projected/88ce2f44-a985-455c-9cd9-8f8452e92dcd-kube-api-access-l5c4q\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.379994 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.380038 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88ce2f44-a985-455c-9cd9-8f8452e92dcd-tmp\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.380201 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.481603 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.481691 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l5c4q\" (UniqueName: \"kubernetes.io/projected/88ce2f44-a985-455c-9cd9-8f8452e92dcd-kube-api-access-l5c4q\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.481712 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.481747 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88ce2f44-a985-455c-9cd9-8f8452e92dcd-tmp\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.482253 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/88ce2f44-a985-455c-9cd9-8f8452e92dcd-tmp\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.482949 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.486986 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88ce2f44-a985-455c-9cd9-8f8452e92dcd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.498594 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5c4q\" (UniqueName: \"kubernetes.io/projected/88ce2f44-a985-455c-9cd9-8f8452e92dcd-kube-api-access-l5c4q\") pod \"marketplace-operator-547dbd544d-nl7z2\" (UID: \"88ce2f44-a985-455c-9cd9-8f8452e92dcd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.574099 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.791827 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nl7z2"] Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.791991 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.792178 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.792218 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.792843 5101 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220"} pod="openshift-machine-config-operator/machine-config-daemon-m45mk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.792914 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" 
containerID="cri-o://e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220" gracePeriod=600 Jan 22 09:57:13 crc kubenswrapper[5101]: I0122 09:57:13.796750 5101 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.416003 5101 generic.go:358] "Generic (PLEG): container finished" podID="8450e755-f74e-492f-8007-24e3410a8926" containerID="e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220" exitCode=0 Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.416100 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerDied","Data":"e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220"} Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.416504 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerStarted","Data":"c98fafbc0bdf5104350cd0edfe9623bd0cbd9fbe271ac3d7a333fbaeadc43f66"} Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.418252 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" event={"ID":"88ce2f44-a985-455c-9cd9-8f8452e92dcd","Type":"ContainerStarted","Data":"42b9ed1d0d8f45f39573867b431a7d3ff614db69a04b7dd1047f527dfd2236c6"} Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.418279 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" event={"ID":"88ce2f44-a985-455c-9cd9-8f8452e92dcd","Type":"ContainerStarted","Data":"776bbc9b0577ab38563f87f738b1f44ab8cbdda093a12a57e465a30637e2ec1e"} Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.418608 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.423973 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" Jan 22 09:57:14 crc kubenswrapper[5101]: I0122 09:57:14.462986 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-nl7z2" podStartSLOduration=1.462962638 podStartE2EDuration="1.462962638s" podCreationTimestamp="2026-01-22 09:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:14.458780223 +0000 UTC m=+306.902410490" watchObservedRunningTime="2026-01-22 09:57:14.462962638 +0000 UTC m=+306.906592905" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.076455 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w42p"] Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.171520 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w42p"] Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.171728 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291091 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291152 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-registry-tls\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291216 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291247 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/66ac8b47-655b-4b61-8a26-368512f77608-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291509 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291578 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/66ac8b47-655b-4b61-8a26-368512f77608-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291651 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.291847 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqgc\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-kube-api-access-mgqgc\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.311646 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393327 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393491 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqgc\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-kube-api-access-mgqgc\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393553 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393583 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-registry-tls\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393731 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/66ac8b47-655b-4b61-8a26-368512f77608-ca-trust-extracted\") pod 
\"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393783 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.393806 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/66ac8b47-655b-4b61-8a26-368512f77608-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.394692 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/66ac8b47-655b-4b61-8a26-368512f77608-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.395439 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-trusted-ca\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.395501 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/66ac8b47-655b-4b61-8a26-368512f77608-registry-certificates\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.403460 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-registry-tls\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.405008 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/66ac8b47-655b-4b61-8a26-368512f77608-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.409987 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqgc\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-kube-api-access-mgqgc\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.410625 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66ac8b47-655b-4b61-8a26-368512f77608-bound-sa-token\") pod \"image-registry-5d9d95bf5b-5w42p\" (UID: \"66ac8b47-655b-4b61-8a26-368512f77608\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.486612 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:19 crc kubenswrapper[5101]: I0122 09:57:19.722354 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-5w42p"] Jan 22 09:57:20 crc kubenswrapper[5101]: I0122 09:57:20.454511 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" event={"ID":"66ac8b47-655b-4b61-8a26-368512f77608","Type":"ContainerStarted","Data":"0dd53a671ae1b779f51c63c0dbeb12910ba001f0c2105288067cdc7ff8258398"} Jan 22 09:57:20 crc kubenswrapper[5101]: I0122 09:57:20.454936 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" Jan 22 09:57:20 crc kubenswrapper[5101]: I0122 09:57:20.454964 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" event={"ID":"66ac8b47-655b-4b61-8a26-368512f77608","Type":"ContainerStarted","Data":"e6ace532762365b31fbe5f9efb5d0fbf6f538ef29ab4e662dcbc1c77f6a0619d"} Jan 22 09:57:20 crc kubenswrapper[5101]: I0122 09:57:20.473917 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p" podStartSLOduration=1.473894231 podStartE2EDuration="1.473894231s" podCreationTimestamp="2026-01-22 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:20.470578202 +0000 UTC m=+312.914208469" watchObservedRunningTime="2026-01-22 09:57:20.473894231 +0000 UTC m=+312.917524488" Jan 22 09:57:29 crc kubenswrapper[5101]: I0122 09:57:29.844854 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 09:57:29 crc kubenswrapper[5101]: I0122 09:57:29.846297 5101 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" podUID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" containerName="controller-manager" containerID="cri-o://3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259" gracePeriod=30 Jan 22 09:57:29 crc kubenswrapper[5101]: I0122 09:57:29.853883 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"] Jan 22 09:57:29 crc kubenswrapper[5101]: I0122 09:57:29.854174 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" containerID="cri-o://779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30" gracePeriod=30 Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.232804 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.260470 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.261160 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.261185 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.261302 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.268803 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.279270 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.301910 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.332586 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.333741 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" containerName="controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.333868 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" containerName="controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.334064 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" containerName="controller-manager" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.341079 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.341433 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.354529 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp\") pod \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.354596 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config\") pod \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.354693 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert\") pod \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355007 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8dcs\" (UniqueName: \"kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs\") pod \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355194 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca\") pod \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\" (UID: \"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355475 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rdbj6\" (UniqueName: \"kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355548 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp" (OuterVolumeSpecName: "tmp") pod "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" (UID: "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355627 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355662 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355774 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " 
pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355773 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config" (OuterVolumeSpecName: "config") pod "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" (UID: "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355800 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355973 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.355985 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.356236 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" (UID: "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.362916 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs" (OuterVolumeSpecName: "kube-api-access-f8dcs") pod "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" (UID: "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb"). InnerVolumeSpecName "kube-api-access-f8dcs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.363879 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" (UID: "2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456620 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456801 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sp4f\" (UniqueName: \"kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456830 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: 
\"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456854 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456914 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.456980 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp\") pod \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\" (UID: \"1aa3720b-6520-49ef-96d2-bf634f1a5f8c\") " Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457120 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457191 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457285 5101 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457593 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp" (OuterVolumeSpecName: "tmp") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457833 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457859 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.457928 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config" (OuterVolumeSpecName: "config") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458390 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458540 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458611 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdbj6\" (UniqueName: \"kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458779 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6fb\" (UniqueName: \"kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458877 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458910 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.458935 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.459002 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.459068 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 
22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460081 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460367 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460462 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460486 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460499 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8dcs\" (UniqueName: \"kubernetes.io/projected/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-kube-api-access-f8dcs\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460512 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460522 5101 reconciler_common.go:299] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460532 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.460544 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.461052 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f" (OuterVolumeSpecName: "kube-api-access-6sp4f") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). InnerVolumeSpecName "kube-api-access-6sp4f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.461531 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1aa3720b-6520-49ef-96d2-bf634f1a5f8c" (UID: "1aa3720b-6520-49ef-96d2-bf634f1a5f8c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.462817 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.474342 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdbj6\" (UniqueName: \"kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6\") pod \"route-controller-manager-5674756756-xbblw\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") " pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.514885 5101 generic.go:358] "Generic (PLEG): container finished" podID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerID="779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30" exitCode=0 Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.515007 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" event={"ID":"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb","Type":"ContainerDied","Data":"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30"} Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.515038 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" event={"ID":"2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb","Type":"ContainerDied","Data":"34d9b45318a79048dc3074903e235097e7d8189ab8c639e369da7ce8d554b9f3"} Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.515055 5101 scope.go:117] "RemoveContainer" 
containerID="779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.515049 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.517230 5101 generic.go:358] "Generic (PLEG): container finished" podID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" containerID="3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259" exitCode=0 Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.517327 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.517374 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" event={"ID":"1aa3720b-6520-49ef-96d2-bf634f1a5f8c","Type":"ContainerDied","Data":"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259"} Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.517393 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-64f6k" event={"ID":"1aa3720b-6520-49ef-96d2-bf634f1a5f8c","Type":"ContainerDied","Data":"80e4e3c2a130aa0f6001ce39486c9405f35496273009427ec6cf24811173cc02"} Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.537776 5101 scope.go:117] "RemoveContainer" containerID="779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30" Jan 22 09:57:30 crc kubenswrapper[5101]: E0122 09:57:30.538238 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30\": container with ID starting with 779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30 not 
found: ID does not exist" containerID="779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.538332 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30"} err="failed to get container status \"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30\": rpc error: code = NotFound desc = could not find container \"779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30\": container with ID starting with 779c244a4644a02e7ba0c447471576431b7be410a586f8bbae0f227dc323ee30 not found: ID does not exist" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.538362 5101 scope.go:117] "RemoveContainer" containerID="3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.556716 5101 scope.go:117] "RemoveContainer" containerID="3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.556698 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"] Jan 22 09:57:30 crc kubenswrapper[5101]: E0122 09:57:30.557310 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259\": container with ID starting with 3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259 not found: ID does not exist" containerID="3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.557359 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259"} err="failed to get container status 
\"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259\": rpc error: code = NotFound desc = could not find container \"3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259\": container with ID starting with 3595deb16a90f6acd87b1e958d82bd0181ad3e4780bacd49c9499cfc8f236259 not found: ID does not exist" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.559506 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.561635 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.561857 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6fb\" (UniqueName: \"kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.561997 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562106 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562249 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562355 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562504 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562620 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sp4f\" (UniqueName: \"kubernetes.io/projected/1aa3720b-6520-49ef-96d2-bf634f1a5f8c-kube-api-access-6sp4f\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.562670 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 
22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.563241 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.563317 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.563965 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.569279 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.571200 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.576059 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-64f6k"] Jan 22 
09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.582731 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6fb\" (UniqueName: \"kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb\") pod \"controller-manager-868db66c75-jc98d\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") " pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.612604 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.664364 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" Jan 22 09:57:30 crc kubenswrapper[5101]: I0122 09:57:30.843663 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"] Jan 22 09:57:30 crc kubenswrapper[5101]: W0122 09:57:30.848959 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a20b5fe_58e0_458d_aeb1_05b9ea5487db.slice/crio-8246d481e26d856a01041869755b723aa697d0a7d689d7d1443cd18dcdb85dfe WatchSource:0}: Error finding container 8246d481e26d856a01041869755b723aa697d0a7d689d7d1443cd18dcdb85dfe: Status 404 returned error can't find the container with id 8246d481e26d856a01041869755b723aa697d0a7d689d7d1443cd18dcdb85dfe Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.012443 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"] Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.123707 5101 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-flq7f container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.123867 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-flq7f" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.525558 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" event={"ID":"6a20b5fe-58e0-458d-aeb1-05b9ea5487db","Type":"ContainerStarted","Data":"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"} Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.525622 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" event={"ID":"6a20b5fe-58e0-458d-aeb1-05b9ea5487db","Type":"ContainerStarted","Data":"8246d481e26d856a01041869755b723aa697d0a7d689d7d1443cd18dcdb85dfe"} Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.528926 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" event={"ID":"2ca92efd-8d3f-44e9-b93d-4832efb4298a","Type":"ContainerStarted","Data":"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"} Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.528970 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" 
event={"ID":"2ca92efd-8d3f-44e9-b93d-4832efb4298a","Type":"ContainerStarted","Data":"152ef42e950da59ca32c5fdd34046d8d65868c963c9f46e0477dfaf838a92e48"}
Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.529308 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"
Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.550420 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" podStartSLOduration=1.550395563 podStartE2EDuration="1.550395563s" podCreationTimestamp="2026-01-22 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:31.5499589 +0000 UTC m=+323.993589187" watchObservedRunningTime="2026-01-22 09:57:31.550395563 +0000 UTC m=+323.994025830"
Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.811628 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"
Jan 22 09:57:31 crc kubenswrapper[5101]: I0122 09:57:31.831616 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" podStartSLOduration=1.8315968360000001 podStartE2EDuration="1.831596836s" podCreationTimestamp="2026-01-22 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:31.572113952 +0000 UTC m=+324.015744209" watchObservedRunningTime="2026-01-22 09:57:31.831596836 +0000 UTC m=+324.275227103"
Jan 22 09:57:32 crc kubenswrapper[5101]: I0122 09:57:32.293817 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"]
Jan 22 09:57:32 crc kubenswrapper[5101]: I0122 09:57:32.535865 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa3720b-6520-49ef-96d2-bf634f1a5f8c" path="/var/lib/kubelet/pods/1aa3720b-6520-49ef-96d2-bf634f1a5f8c/volumes"
Jan 22 09:57:32 crc kubenswrapper[5101]: I0122 09:57:32.536839 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb" path="/var/lib/kubelet/pods/2fdd0ac3-bc6e-4ecc-8571-66b52d33e1fb/volumes"
Jan 22 09:57:32 crc kubenswrapper[5101]: I0122 09:57:32.537342 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d"
Jan 22 09:57:32 crc kubenswrapper[5101]: I0122 09:57:32.541686 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d"
Jan 22 09:57:36 crc kubenswrapper[5101]: I0122 09:57:36.312911 5101 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:57:41 crc kubenswrapper[5101]: I0122 09:57:41.467598 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-5w42p"
Jan 22 09:57:41 crc kubenswrapper[5101]: I0122 09:57:41.519531 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"]
Jan 22 09:57:49 crc kubenswrapper[5101]: I0122 09:57:49.874379 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"]
Jan 22 09:57:49 crc kubenswrapper[5101]: I0122 09:57:49.879831 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" podUID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" containerName="controller-manager" containerID="cri-o://ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250" gracePeriod=30
Jan 22 09:57:49 crc kubenswrapper[5101]: I0122 09:57:49.884000 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"]
Jan 22 09:57:49 crc kubenswrapper[5101]: I0122 09:57:49.884479 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" podUID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" containerName="route-controller-manager" containerID="cri-o://963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797" gracePeriod=30
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.359833 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.383300 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.383882 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" containerName="route-controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.383895 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" containerName="route-controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.384004 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" containerName="route-controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.387254 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.445581 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca\") pod \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.445651 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config\") pod \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.445790 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert\") pod \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.445861 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdbj6\" (UniqueName: \"kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6\") pod \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.445891 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp\") pod \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\" (UID: \"2ca92efd-8d3f-44e9-b93d-4832efb4298a\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446242 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp" (OuterVolumeSpecName: "tmp") pod "2ca92efd-8d3f-44e9-b93d-4832efb4298a" (UID: "2ca92efd-8d3f-44e9-b93d-4832efb4298a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446483 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ca92efd-8d3f-44e9-b93d-4832efb4298a" (UID: "2ca92efd-8d3f-44e9-b93d-4832efb4298a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446502 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config" (OuterVolumeSpecName: "config") pod "2ca92efd-8d3f-44e9-b93d-4832efb4298a" (UID: "2ca92efd-8d3f-44e9-b93d-4832efb4298a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446536 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d353e75-239b-4af7-8e71-6d25ad1fe394-tmp\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446613 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9xc5\" (UniqueName: \"kubernetes.io/projected/1d353e75-239b-4af7-8e71-6d25ad1fe394-kube-api-access-f9xc5\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446694 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d353e75-239b-4af7-8e71-6d25ad1fe394-serving-cert\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446754 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-client-ca\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446792 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-config\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446859 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446874 5101 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca92efd-8d3f-44e9-b93d-4832efb4298a-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.446899 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2ca92efd-8d3f-44e9-b93d-4832efb4298a-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.458712 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6" (OuterVolumeSpecName: "kube-api-access-rdbj6") pod "2ca92efd-8d3f-44e9-b93d-4832efb4298a" (UID: "2ca92efd-8d3f-44e9-b93d-4832efb4298a"). InnerVolumeSpecName "kube-api-access-rdbj6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.461322 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ca92efd-8d3f-44e9-b93d-4832efb4298a" (UID: "2ca92efd-8d3f-44e9-b93d-4832efb4298a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.481087 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548269 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d353e75-239b-4af7-8e71-6d25ad1fe394-serving-cert\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548345 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-client-ca\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548368 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-config\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548389 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d353e75-239b-4af7-8e71-6d25ad1fe394-tmp\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548449 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9xc5\" (UniqueName: \"kubernetes.io/projected/1d353e75-239b-4af7-8e71-6d25ad1fe394-kube-api-access-f9xc5\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548498 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca92efd-8d3f-44e9-b93d-4832efb4298a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.548511 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdbj6\" (UniqueName: \"kubernetes.io/projected/2ca92efd-8d3f-44e9-b93d-4832efb4298a-kube-api-access-rdbj6\") on node \"crc\" DevicePath \"\""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.549862 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d353e75-239b-4af7-8e71-6d25ad1fe394-tmp\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.549878 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-client-ca\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.550121 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d353e75-239b-4af7-8e71-6d25ad1fe394-config\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.552730 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.553410 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d353e75-239b-4af7-8e71-6d25ad1fe394-serving-cert\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.566200 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9xc5\" (UniqueName: \"kubernetes.io/projected/1d353e75-239b-4af7-8e71-6d25ad1fe394-kube-api-access-f9xc5\") pod \"route-controller-manager-5968644ccb-pq9jg\" (UID: \"1d353e75-239b-4af7-8e71-6d25ad1fe394\") " pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.581574 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68648f5d75-shh67"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.582341 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" containerName="controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.582368 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" containerName="controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.582505 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" containerName="controller-manager"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.589782 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.591393 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68648f5d75-shh67"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.628279 5101 generic.go:358] "Generic (PLEG): container finished" podID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" containerID="ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250" exitCode=0
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.628528 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" event={"ID":"6a20b5fe-58e0-458d-aeb1-05b9ea5487db","Type":"ContainerDied","Data":"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"}
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.628640 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d" event={"ID":"6a20b5fe-58e0-458d-aeb1-05b9ea5487db","Type":"ContainerDied","Data":"8246d481e26d856a01041869755b723aa697d0a7d689d7d1443cd18dcdb85dfe"}
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.628706 5101 scope.go:117] "RemoveContainer" containerID="ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.628903 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868db66c75-jc98d"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.629803 5101 generic.go:358] "Generic (PLEG): container finished" podID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" containerID="963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797" exitCode=0
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.629973 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" event={"ID":"2ca92efd-8d3f-44e9-b93d-4832efb4298a","Type":"ContainerDied","Data":"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"}
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.629997 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw" event={"ID":"2ca92efd-8d3f-44e9-b93d-4832efb4298a","Type":"ContainerDied","Data":"152ef42e950da59ca32c5fdd34046d8d65868c963c9f46e0477dfaf838a92e48"}
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.630054 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649600 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649715 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649789 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649848 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd6fb\" (UniqueName: \"kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649868 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.649949 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config\") pod \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\" (UID: \"6a20b5fe-58e0-458d-aeb1-05b9ea5487db\") "
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650126 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-config\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650152 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-client-ca\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650212 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-proxy-ca-bundles\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650250 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-serving-cert\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650335 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2np\" (UniqueName: \"kubernetes.io/projected/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-kube-api-access-mf2np\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650365 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-tmp\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.650933 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp" (OuterVolumeSpecName: "tmp") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.651543 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.654672 5101 scope.go:117] "RemoveContainer" containerID="ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.654786 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.656269 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.657540 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb" (OuterVolumeSpecName: "kube-api-access-xd6fb") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "kube-api-access-xd6fb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: E0122 09:57:50.657581 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250\": container with ID starting with ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250 not found: ID does not exist" containerID="ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.657636 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250"} err="failed to get container status \"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250\": rpc error: code = NotFound desc = could not find container \"ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250\": container with ID starting with ef720baa96455bbde33de696f0262120cd2da710955ee4b504306e9599d8b250 not found: ID does not exist"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.657671 5101 scope.go:117] "RemoveContainer" containerID="963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.661322 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config" (OuterVolumeSpecName: "config") pod "6a20b5fe-58e0-458d-aeb1-05b9ea5487db" (UID: "6a20b5fe-58e0-458d-aeb1-05b9ea5487db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.666094 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.671956 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5674756756-xbblw"]
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.686673 5101 scope.go:117] "RemoveContainer" containerID="963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"
Jan 22 09:57:50 crc kubenswrapper[5101]: E0122 09:57:50.687154 5101 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797\": container with ID starting with 963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797 not found: ID does not exist" containerID="963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.687202 5101 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797"} err="failed to get container status \"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797\": rpc error: code = NotFound desc = could not find container \"963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797\": container with ID starting with 963f3df2f2947deaa4224b728ec80609a8d30399962be5c242c439b8d2645797 not found: ID does not exist"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.751467 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mf2np\" (UniqueName: \"kubernetes.io/projected/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-kube-api-access-mf2np\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.751541 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-tmp\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752010 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-tmp\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752057 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-config\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752093 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-client-ca\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752726 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-client-ca\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752128 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-proxy-ca-bundles\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.752833 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-serving-cert\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753142 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-config\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753186 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-proxy-ca-bundles\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67"
Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753358 5101 reconciler_common.go:299] "Volume
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753385 5101 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753395 5101 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753404 5101 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753413 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xd6fb\" (UniqueName: \"kubernetes.io/projected/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-kube-api-access-xd6fb\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.753437 5101 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a20b5fe-58e0-458d-aeb1-05b9ea5487db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.756084 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-serving-cert\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.768329 5101 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mf2np\" (UniqueName: \"kubernetes.io/projected/03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5-kube-api-access-mf2np\") pod \"controller-manager-68648f5d75-shh67\" (UID: \"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5\") " pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.782406 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.910268 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.958722 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg"] Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.975970 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"] Jan 22 09:57:50 crc kubenswrapper[5101]: I0122 09:57:50.980525 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-868db66c75-jc98d"] Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.116841 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68648f5d75-shh67"] Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.635844 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" event={"ID":"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5","Type":"ContainerStarted","Data":"58ddb5bec99b738fd757f0de813e63ccbb38da154e7133afb4dc4874acbf0bc7"} Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.637615 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" event={"ID":"1d353e75-239b-4af7-8e71-6d25ad1fe394","Type":"ContainerStarted","Data":"66d0175967c1b684c46451e92f4b5886bcb694f5119d1b4abd35d85a9be74ab8"} Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.637702 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" event={"ID":"1d353e75-239b-4af7-8e71-6d25ad1fe394","Type":"ContainerStarted","Data":"76988aa523a4dd7ee5a20a9c6829bcf451018aa2f83f390b5bd1672c9047be2c"} Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.637941 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.657922 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" podStartSLOduration=2.657899377 podStartE2EDuration="2.657899377s" podCreationTimestamp="2026-01-22 09:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:51.657519736 +0000 UTC m=+344.101150023" watchObservedRunningTime="2026-01-22 09:57:51.657899377 +0000 UTC m=+344.101529644" Jan 22 09:57:51 crc kubenswrapper[5101]: I0122 09:57:51.955837 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5968644ccb-pq9jg" Jan 22 09:57:52 crc kubenswrapper[5101]: I0122 09:57:52.545135 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca92efd-8d3f-44e9-b93d-4832efb4298a" path="/var/lib/kubelet/pods/2ca92efd-8d3f-44e9-b93d-4832efb4298a/volumes" Jan 22 09:57:52 crc kubenswrapper[5101]: I0122 09:57:52.545751 5101 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6a20b5fe-58e0-458d-aeb1-05b9ea5487db" path="/var/lib/kubelet/pods/6a20b5fe-58e0-458d-aeb1-05b9ea5487db/volumes" Jan 22 09:57:52 crc kubenswrapper[5101]: I0122 09:57:52.648928 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" event={"ID":"03bc6f59-e45c-4036-b1c6-c33bf3ca8eb5","Type":"ContainerStarted","Data":"713a7eafea611808d161127a57028b14739ec8e3073137c263e91ca5c89873cf"} Jan 22 09:57:53 crc kubenswrapper[5101]: I0122 09:57:53.656105 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" Jan 22 09:57:53 crc kubenswrapper[5101]: I0122 09:57:53.662664 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" Jan 22 09:57:53 crc kubenswrapper[5101]: I0122 09:57:53.677101 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68648f5d75-shh67" podStartSLOduration=4.677080758 podStartE2EDuration="4.677080758s" podCreationTimestamp="2026-01-22 09:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:52.66873843 +0000 UTC m=+345.112368717" watchObservedRunningTime="2026-01-22 09:57:53.677080758 +0000 UTC m=+346.120711025" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.320844 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" containerID="cri-o://79480be7d0af0ff9be6d8ea5c0ccfc0f84e19f378235c9269038a674b3002cbe" gracePeriod=15 Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.681303 5101 generic.go:358] "Generic (PLEG): container 
finished" podID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerID="79480be7d0af0ff9be6d8ea5c0ccfc0f84e19f378235c9269038a674b3002cbe" exitCode=0 Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.681610 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" event={"ID":"16e791e1-266c-46d9-a6cb-d6c7e48d4df9","Type":"ContainerDied","Data":"79480be7d0af0ff9be6d8ea5c0ccfc0f84e19f378235c9269038a674b3002cbe"} Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.734204 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.768761 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr"] Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.769334 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.769351 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.769500 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" containerName="oauth-openshift" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.787317 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr"] Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.787494 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853690 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853786 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853877 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853906 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853959 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: 
\"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.853994 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854022 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854047 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854075 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854103 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 
09:57:57.854225 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854259 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854286 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854315 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection\") pod \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\" (UID: \"16e791e1-266c-46d9-a6cb-d6c7e48d4df9\") " Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854515 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 
09:57:57.854564 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854620 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854657 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854680 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-dir\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854703 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854723 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854677 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854767 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854823 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.854830 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855034 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855075 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-policies\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 
09:57:57.855124 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-session\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855296 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855305 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855433 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56qp\" (UniqueName: \"kubernetes.io/projected/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-kube-api-access-z56qp\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855472 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855337 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855596 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855613 5101 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855624 5101 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.855635 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.860150 5101 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.860789 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.860900 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv" (OuterVolumeSpecName: "kube-api-access-8nlfv") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "kube-api-access-8nlfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.864963 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.865333 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.866751 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.867006 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.867641 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.876625 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "16e791e1-266c-46d9-a6cb-d6c7e48d4df9" (UID: "16e791e1-266c-46d9-a6cb-d6c7e48d4df9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957193 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957291 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957504 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-policies\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957626 5101 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-session\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957717 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957764 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z56qp\" (UniqueName: \"kubernetes.io/projected/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-kube-api-access-z56qp\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957802 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957896 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: 
\"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.957951 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958002 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958049 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958079 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-dir\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958106 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958131 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958210 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958224 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958238 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nlfv\" (UniqueName: \"kubernetes.io/projected/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-kube-api-access-8nlfv\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958248 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc 
kubenswrapper[5101]: I0122 09:57:57.958259 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958270 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958287 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958297 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958309 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958322 5101 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16e791e1-266c-46d9-a6cb-d6c7e48d4df9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.958238 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-dir\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.959300 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.959475 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.959491 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-audit-policies\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.959553 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 
09:57:57.962323 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.962779 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.963384 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.963788 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.963923 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.963959 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.963909 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-session\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.964398 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:57 crc kubenswrapper[5101]: I0122 09:57:57.975670 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56qp\" (UniqueName: \"kubernetes.io/projected/cc0d5118-521d-40e5-ba21-4b01fa5a8aec-kube-api-access-z56qp\") pod \"oauth-openshift-6ddcc6c8c9-qkszr\" (UID: \"cc0d5118-521d-40e5-ba21-4b01fa5a8aec\") " pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:58 
crc kubenswrapper[5101]: I0122 09:57:58.101841 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.506149 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr"] Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.691649 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" event={"ID":"cc0d5118-521d-40e5-ba21-4b01fa5a8aec","Type":"ContainerStarted","Data":"0e3e6d24c3dba8ad95d798f391693fa17d62ed86aeafb841395e1911e19be3c7"} Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.694339 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.694366 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7gkpq" event={"ID":"16e791e1-266c-46d9-a6cb-d6c7e48d4df9","Type":"ContainerDied","Data":"33d509e94ecb99fe74dff1726f53fd0a7bef2a9631981ac2c11ca991e435f8f7"} Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.694476 5101 scope.go:117] "RemoveContainer" containerID="79480be7d0af0ff9be6d8ea5c0ccfc0f84e19f378235c9269038a674b3002cbe" Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.722972 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"] Jan 22 09:57:58 crc kubenswrapper[5101]: I0122 09:57:58.731222 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7gkpq"] Jan 22 09:57:59 crc kubenswrapper[5101]: I0122 09:57:59.708603 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" 
event={"ID":"cc0d5118-521d-40e5-ba21-4b01fa5a8aec","Type":"ContainerStarted","Data":"a615d7590ae6ab2afd284c6c4cb7c6cb0e47245fddbec9e818905ad84380c6da"} Jan 22 09:57:59 crc kubenswrapper[5101]: I0122 09:57:59.709155 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:59 crc kubenswrapper[5101]: I0122 09:57:59.715297 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" Jan 22 09:57:59 crc kubenswrapper[5101]: I0122 09:57:59.760718 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6ddcc6c8c9-qkszr" podStartSLOduration=27.760698253 podStartE2EDuration="27.760698253s" podCreationTimestamp="2026-01-22 09:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:57:59.737089803 +0000 UTC m=+352.180720080" watchObservedRunningTime="2026-01-22 09:57:59.760698253 +0000 UTC m=+352.204328520" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.114915 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdm9w"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.121589 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.124665 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.125646 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdm9w"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.212604 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484598-dd5dj"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.218200 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.221094 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.221535 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.222005 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mc72n\"" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.222613 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484598-dd5dj"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.296470 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrx65\" (UniqueName: \"kubernetes.io/projected/b48dabf0-585e-4cee-974b-e44576af29c1-kube-api-access-qrx65\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " 
pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.296542 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-utilities\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.296717 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-catalog-content\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.322514 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ctwhl"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.328994 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.330951 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.336900 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctwhl"] Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.398074 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qrx65\" (UniqueName: \"kubernetes.io/projected/b48dabf0-585e-4cee-974b-e44576af29c1-kube-api-access-qrx65\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.398151 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-utilities\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.398179 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lv9\" (UniqueName: \"kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9\") pod \"auto-csr-approver-29484598-dd5dj\" (UID: \"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8\") " pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.398218 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-catalog-content\") pod \"certified-operators-wdm9w\" (UID: 
\"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.398846 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-catalog-content\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.399071 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b48dabf0-585e-4cee-974b-e44576af29c1-utilities\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.421761 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrx65\" (UniqueName: \"kubernetes.io/projected/b48dabf0-585e-4cee-974b-e44576af29c1-kube-api-access-qrx65\") pod \"certified-operators-wdm9w\" (UID: \"b48dabf0-585e-4cee-974b-e44576af29c1\") " pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.438593 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.499773 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-catalog-content\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.499944 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lv9\" (UniqueName: \"kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9\") pod \"auto-csr-approver-29484598-dd5dj\" (UID: \"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8\") " pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.500059 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4xd\" (UniqueName: \"kubernetes.io/projected/476e9a57-b53c-4d37-9879-68f4fe63bf6a-kube-api-access-dn4xd\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.500269 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-utilities\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.519249 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lv9\" (UniqueName: \"kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9\") pod 
\"auto-csr-approver-29484598-dd5dj\" (UID: \"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8\") " pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.533163 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.537814 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e791e1-266c-46d9-a6cb-d6c7e48d4df9" path="/var/lib/kubelet/pods/16e791e1-266c-46d9-a6cb-d6c7e48d4df9/volumes" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.607546 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-utilities\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.608200 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-catalog-content\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.608286 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4xd\" (UniqueName: \"kubernetes.io/projected/476e9a57-b53c-4d37-9879-68f4fe63bf6a-kube-api-access-dn4xd\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.608297 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-utilities\") 
pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.608736 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476e9a57-b53c-4d37-9879-68f4fe63bf6a-catalog-content\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.633928 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4xd\" (UniqueName: \"kubernetes.io/projected/476e9a57-b53c-4d37-9879-68f4fe63bf6a-kube-api-access-dn4xd\") pod \"redhat-operators-ctwhl\" (UID: \"476e9a57-b53c-4d37-9879-68f4fe63bf6a\") " pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.647072 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:00 crc kubenswrapper[5101]: I0122 09:58:00.910167 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdm9w"] Jan 22 09:58:00 crc kubenswrapper[5101]: W0122 09:58:00.921297 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48dabf0_585e_4cee_974b_e44576af29c1.slice/crio-aef4a5f98e4f0bf5b0705ec7d51386bcd2af087ad19164ac426052897c7a8e42 WatchSource:0}: Error finding container aef4a5f98e4f0bf5b0705ec7d51386bcd2af087ad19164ac426052897c7a8e42: Status 404 returned error can't find the container with id aef4a5f98e4f0bf5b0705ec7d51386bcd2af087ad19164ac426052897c7a8e42 Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.029547 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484598-dd5dj"] Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.093708 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctwhl"] Jan 22 09:58:01 crc kubenswrapper[5101]: W0122 09:58:01.135599 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476e9a57_b53c_4d37_9879_68f4fe63bf6a.slice/crio-59b4b5cbf932bfaf7fd895e32df03322ab06e6c723c29ddbff231ac67d9db0a7 WatchSource:0}: Error finding container 59b4b5cbf932bfaf7fd895e32df03322ab06e6c723c29ddbff231ac67d9db0a7: Status 404 returned error can't find the container with id 59b4b5cbf932bfaf7fd895e32df03322ab06e6c723c29ddbff231ac67d9db0a7 Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.718924 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7bct"] Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.724326 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.727093 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.730057 5101 generic.go:358] "Generic (PLEG): container finished" podID="476e9a57-b53c-4d37-9879-68f4fe63bf6a" containerID="324d4fc02a1a1b12ff7a2f32654808ef63a0ea3a01021b40a2f8840e50803f27" exitCode=0 Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.730277 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctwhl" event={"ID":"476e9a57-b53c-4d37-9879-68f4fe63bf6a","Type":"ContainerDied","Data":"324d4fc02a1a1b12ff7a2f32654808ef63a0ea3a01021b40a2f8840e50803f27"} Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.730319 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctwhl" event={"ID":"476e9a57-b53c-4d37-9879-68f4fe63bf6a","Type":"ContainerStarted","Data":"59b4b5cbf932bfaf7fd895e32df03322ab06e6c723c29ddbff231ac67d9db0a7"} Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.730802 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7bct"] Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.731650 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" event={"ID":"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8","Type":"ContainerStarted","Data":"d8edfad48c9571baadad7dbd19c24336153e303f9b5f5d60d54b7543c8fedb72"} Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.733818 5101 generic.go:358] "Generic (PLEG): container finished" podID="b48dabf0-585e-4cee-974b-e44576af29c1" containerID="c0f220fd5a638d39d027b2e24a43415508059dab89d4ad62d99670677820dce1" exitCode=0 Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.735048 
5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdm9w" event={"ID":"b48dabf0-585e-4cee-974b-e44576af29c1","Type":"ContainerDied","Data":"c0f220fd5a638d39d027b2e24a43415508059dab89d4ad62d99670677820dce1"} Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.735080 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdm9w" event={"ID":"b48dabf0-585e-4cee-974b-e44576af29c1","Type":"ContainerStarted","Data":"aef4a5f98e4f0bf5b0705ec7d51386bcd2af087ad19164ac426052897c7a8e42"} Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.824666 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdg7c\" (UniqueName: \"kubernetes.io/projected/dea260e9-7622-4e7f-96fb-1f09d50e7b31-kube-api-access-kdg7c\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.824783 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-catalog-content\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.824998 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-utilities\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.926956 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdg7c\" 
(UniqueName: \"kubernetes.io/projected/dea260e9-7622-4e7f-96fb-1f09d50e7b31-kube-api-access-kdg7c\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.927046 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-catalog-content\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.927119 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-utilities\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.927733 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-utilities\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.928181 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea260e9-7622-4e7f-96fb-1f09d50e7b31-catalog-content\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:01 crc kubenswrapper[5101]: I0122 09:58:01.948568 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdg7c\" (UniqueName: 
\"kubernetes.io/projected/dea260e9-7622-4e7f-96fb-1f09d50e7b31-kube-api-access-kdg7c\") pod \"community-operators-x7bct\" (UID: \"dea260e9-7622-4e7f-96fb-1f09d50e7b31\") " pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.067357 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.541738 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7bct"] Jan 22 09:58:02 crc kubenswrapper[5101]: W0122 09:58:02.543775 5101 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddea260e9_7622_4e7f_96fb_1f09d50e7b31.slice/crio-d178f55cb34ad036e1d2dfa4b5d2748e94156490399a1f1062b50ca3f72cb2f5 WatchSource:0}: Error finding container d178f55cb34ad036e1d2dfa4b5d2748e94156490399a1f1062b50ca3f72cb2f5: Status 404 returned error can't find the container with id d178f55cb34ad036e1d2dfa4b5d2748e94156490399a1f1062b50ca3f72cb2f5 Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.745722 5101 generic.go:358] "Generic (PLEG): container finished" podID="b48dabf0-585e-4cee-974b-e44576af29c1" containerID="480093b67e2240f76569669e16c892740d52a017fbd0c6fc4a6aec53b5e424dc" exitCode=0 Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.745940 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdm9w" event={"ID":"b48dabf0-585e-4cee-974b-e44576af29c1","Type":"ContainerDied","Data":"480093b67e2240f76569669e16c892740d52a017fbd0c6fc4a6aec53b5e424dc"} Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.758974 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bct" 
event={"ID":"dea260e9-7622-4e7f-96fb-1f09d50e7b31","Type":"ContainerStarted","Data":"f0a1c747f73929ec33b9f48b0fbd3f6bd88dee2dda50a383e5b398637cb199e2"} Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.759035 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bct" event={"ID":"dea260e9-7622-4e7f-96fb-1f09d50e7b31","Type":"ContainerStarted","Data":"d178f55cb34ad036e1d2dfa4b5d2748e94156490399a1f1062b50ca3f72cb2f5"} Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.921898 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-scvj8"] Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.986312 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scvj8"] Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.986680 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:02 crc kubenswrapper[5101]: I0122 09:58:02.993380 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.157396 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-utilities\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.157521 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-catalog-content\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " 
pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.157716 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pvz\" (UniqueName: \"kubernetes.io/projected/077c67a1-2718-40eb-ae68-3b4fcbeb444e-kube-api-access-68pvz\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.258398 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68pvz\" (UniqueName: \"kubernetes.io/projected/077c67a1-2718-40eb-ae68-3b4fcbeb444e-kube-api-access-68pvz\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.258478 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-utilities\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.258660 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-catalog-content\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.259133 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-utilities\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " 
pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.259307 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077c67a1-2718-40eb-ae68-3b4fcbeb444e-catalog-content\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.285628 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68pvz\" (UniqueName: \"kubernetes.io/projected/077c67a1-2718-40eb-ae68-3b4fcbeb444e-kube-api-access-68pvz\") pod \"redhat-marketplace-scvj8\" (UID: \"077c67a1-2718-40eb-ae68-3b4fcbeb444e\") " pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.318456 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.772148 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdm9w" event={"ID":"b48dabf0-585e-4cee-974b-e44576af29c1","Type":"ContainerStarted","Data":"95d1e4b967e38ecb120fed9b4081bb48cd1734d0e84cacce373415b7c83f2bec"} Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.773803 5101 generic.go:358] "Generic (PLEG): container finished" podID="dea260e9-7622-4e7f-96fb-1f09d50e7b31" containerID="f0a1c747f73929ec33b9f48b0fbd3f6bd88dee2dda50a383e5b398637cb199e2" exitCode=0 Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.773892 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bct" event={"ID":"dea260e9-7622-4e7f-96fb-1f09d50e7b31","Type":"ContainerDied","Data":"f0a1c747f73929ec33b9f48b0fbd3f6bd88dee2dda50a383e5b398637cb199e2"} Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.788693 5101 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scvj8"] Jan 22 09:58:03 crc kubenswrapper[5101]: I0122 09:58:03.797794 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdm9w" podStartSLOduration=3.200656471 podStartE2EDuration="3.797775965s" podCreationTimestamp="2026-01-22 09:58:00 +0000 UTC" firstStartedPulling="2026-01-22 09:58:01.735227721 +0000 UTC m=+354.178857988" lastFinishedPulling="2026-01-22 09:58:02.332347215 +0000 UTC m=+354.775977482" observedRunningTime="2026-01-22 09:58:03.795510957 +0000 UTC m=+356.239141224" watchObservedRunningTime="2026-01-22 09:58:03.797775965 +0000 UTC m=+356.241406222" Jan 22 09:58:04 crc kubenswrapper[5101]: I0122 09:58:04.786786 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scvj8" event={"ID":"077c67a1-2718-40eb-ae68-3b4fcbeb444e","Type":"ContainerStarted","Data":"1cb43b6e731b82280232f331ebb25c17c051a0fc20efe0d580c8b66bc7a2b7e8"} Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.499115 5101 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-wfjdl" Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.519976 5101 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-wfjdl" Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.791980 5101 generic.go:358] "Generic (PLEG): container finished" podID="608a5f84-3a25-48f2-ab78-b23b0b6fc9b8" containerID="998fffbb4957a5e750aadf8559311973a75fefa8639b93d5b7514fd24a402ff1" exitCode=0 Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.792658 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" 
event={"ID":"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8","Type":"ContainerDied","Data":"998fffbb4957a5e750aadf8559311973a75fefa8639b93d5b7514fd24a402ff1"} Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.794744 5101 generic.go:358] "Generic (PLEG): container finished" podID="077c67a1-2718-40eb-ae68-3b4fcbeb444e" containerID="b86104e6aad84c45fde23f88ace7ba6c4adfc942842eeb1fbf415c79908e862b" exitCode=0 Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.794835 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scvj8" event={"ID":"077c67a1-2718-40eb-ae68-3b4fcbeb444e","Type":"ContainerDied","Data":"b86104e6aad84c45fde23f88ace7ba6c4adfc942842eeb1fbf415c79908e862b"} Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.797490 5101 generic.go:358] "Generic (PLEG): container finished" podID="dea260e9-7622-4e7f-96fb-1f09d50e7b31" containerID="81ded3329902b3f831cc33626326a67f2e9ab60a7f24efc9b1e8e88dfbbbb089" exitCode=0 Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.797665 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bct" event={"ID":"dea260e9-7622-4e7f-96fb-1f09d50e7b31","Type":"ContainerDied","Data":"81ded3329902b3f831cc33626326a67f2e9ab60a7f24efc9b1e8e88dfbbbb089"} Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.800849 5101 generic.go:358] "Generic (PLEG): container finished" podID="476e9a57-b53c-4d37-9879-68f4fe63bf6a" containerID="0b3f934b652d76dddc7a56d757a7b8ae06e42319586e4db98fdcd75fa3312340" exitCode=0 Jan 22 09:58:05 crc kubenswrapper[5101]: I0122 09:58:05.800966 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctwhl" event={"ID":"476e9a57-b53c-4d37-9879-68f4fe63bf6a","Type":"ContainerDied","Data":"0b3f934b652d76dddc7a56d757a7b8ae06e42319586e4db98fdcd75fa3312340"} Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.522054 5101 certificate_manager.go:715] "Certificate rotation 
deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 09:53:05 +0000 UTC" deadline="2026-02-12 18:55:27.855688851 +0000 UTC" Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.522296 5101 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="512h57m21.333396314s" Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.561573 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" containerName="registry" containerID="cri-o://743114f6f0ee4657c48650fad0701dd85d8bc16d9e13d5e3772c8eb161e867a2" gracePeriod=30 Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.809128 5101 generic.go:358] "Generic (PLEG): container finished" podID="077c67a1-2718-40eb-ae68-3b4fcbeb444e" containerID="b5b1f998497663684269c10b31d8371ffbce6a71a9b6e0344b4aa02928ba5857" exitCode=0 Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.809218 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scvj8" event={"ID":"077c67a1-2718-40eb-ae68-3b4fcbeb444e","Type":"ContainerDied","Data":"b5b1f998497663684269c10b31d8371ffbce6a71a9b6e0344b4aa02928ba5857"} Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.812622 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bct" event={"ID":"dea260e9-7622-4e7f-96fb-1f09d50e7b31","Type":"ContainerStarted","Data":"cd62ecc30209c3b2d325871b8114f7b4a676b8b0a05a958bdca8cd15ade8c8ea"} Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.820372 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctwhl" event={"ID":"476e9a57-b53c-4d37-9879-68f4fe63bf6a","Type":"ContainerStarted","Data":"db2bacf7f6b5f597265d6d4df920dc7fb54f3fa0dd69818bcbc9b6ad73dbde03"} Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 
09:58:06.822436 5101 generic.go:358] "Generic (PLEG): container finished" podID="b182bd55-8225-4386-aa02-40b8c9358df5" containerID="743114f6f0ee4657c48650fad0701dd85d8bc16d9e13d5e3772c8eb161e867a2" exitCode=0 Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.822525 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" event={"ID":"b182bd55-8225-4386-aa02-40b8c9358df5","Type":"ContainerDied","Data":"743114f6f0ee4657c48650fad0701dd85d8bc16d9e13d5e3772c8eb161e867a2"} Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.853143 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7bct" podStartSLOduration=3.885546772 podStartE2EDuration="5.8531219s" podCreationTimestamp="2026-01-22 09:58:01 +0000 UTC" firstStartedPulling="2026-01-22 09:58:02.759961061 +0000 UTC m=+355.203591328" lastFinishedPulling="2026-01-22 09:58:04.727536189 +0000 UTC m=+357.171166456" observedRunningTime="2026-01-22 09:58:06.850405349 +0000 UTC m=+359.294035616" watchObservedRunningTime="2026-01-22 09:58:06.8531219 +0000 UTC m=+359.296752167" Jan 22 09:58:06 crc kubenswrapper[5101]: I0122 09:58:06.879951 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ctwhl" podStartSLOduration=3.974050019 podStartE2EDuration="6.879928077s" podCreationTimestamp="2026-01-22 09:58:00 +0000 UTC" firstStartedPulling="2026-01-22 09:58:01.731307553 +0000 UTC m=+354.174937830" lastFinishedPulling="2026-01-22 09:58:04.637185621 +0000 UTC m=+357.080815888" observedRunningTime="2026-01-22 09:58:06.875562435 +0000 UTC m=+359.319192702" watchObservedRunningTime="2026-01-22 09:58:06.879928077 +0000 UTC m=+359.323558344" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.133322 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.230996 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vvq\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.231358 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.231484 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.231623 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.231732 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.231976 5101 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.233110 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.233273 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets\") pod \"b182bd55-8225-4386-aa02-40b8c9358df5\" (UID: \"b182bd55-8225-4386-aa02-40b8c9358df5\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.232243 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.232361 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.241525 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.241737 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.242201 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.247118 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.247914 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq" (OuterVolumeSpecName: "kube-api-access-b4vvq") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "kube-api-access-b4vvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.251916 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b182bd55-8225-4386-aa02-40b8c9358df5" (UID: "b182bd55-8225-4386-aa02-40b8c9358df5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334378 5101 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334414 5101 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334455 5101 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334468 5101 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/b182bd55-8225-4386-aa02-40b8c9358df5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334478 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4vvq\" (UniqueName: \"kubernetes.io/projected/b182bd55-8225-4386-aa02-40b8c9358df5-kube-api-access-b4vvq\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334488 5101 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b182bd55-8225-4386-aa02-40b8c9358df5-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.334499 5101 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b182bd55-8225-4386-aa02-40b8c9358df5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.375112 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.435986 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5lv9\" (UniqueName: \"kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9\") pod \"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8\" (UID: \"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8\") " Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.440563 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9" (OuterVolumeSpecName: "kube-api-access-p5lv9") pod "608a5f84-3a25-48f2-ab78-b23b0b6fc9b8" (UID: "608a5f84-3a25-48f2-ab78-b23b0b6fc9b8"). InnerVolumeSpecName "kube-api-access-p5lv9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.537341 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5lv9\" (UniqueName: \"kubernetes.io/projected/608a5f84-3a25-48f2-ab78-b23b0b6fc9b8-kube-api-access-p5lv9\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.842149 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scvj8" event={"ID":"077c67a1-2718-40eb-ae68-3b4fcbeb444e","Type":"ContainerStarted","Data":"3710eefa000bdd3462f70b472332d3f2cb4521cb5053a201248c369ad5b67278"} Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.843936 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" event={"ID":"b182bd55-8225-4386-aa02-40b8c9358df5","Type":"ContainerDied","Data":"a465d5da2511aec60941cdfc504ba30ae4d06c359286367bd2dc500f5bc4d81d"} Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.843980 5101 scope.go:117] "RemoveContainer" containerID="743114f6f0ee4657c48650fad0701dd85d8bc16d9e13d5e3772c8eb161e867a2" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.844031 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.847584 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" event={"ID":"608a5f84-3a25-48f2-ab78-b23b0b6fc9b8","Type":"ContainerDied","Data":"d8edfad48c9571baadad7dbd19c24336153e303f9b5f5d60d54b7543c8fedb72"} Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.847628 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8edfad48c9571baadad7dbd19c24336153e303f9b5f5d60d54b7543c8fedb72" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.847608 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484598-dd5dj" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.868233 5101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-scvj8" podStartSLOduration=5.375518177 podStartE2EDuration="5.868214751s" podCreationTimestamp="2026-01-22 09:58:02 +0000 UTC" firstStartedPulling="2026-01-22 09:58:05.795673405 +0000 UTC m=+358.239303672" lastFinishedPulling="2026-01-22 09:58:06.288369979 +0000 UTC m=+358.732000246" observedRunningTime="2026-01-22 09:58:07.867202521 +0000 UTC m=+360.310832788" watchObservedRunningTime="2026-01-22 09:58:07.868214751 +0000 UTC m=+360.311845018" Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.904395 5101 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"] Jan 22 09:58:07 crc kubenswrapper[5101]: I0122 09:58:07.909740 5101 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7pcd5"] Jan 22 09:58:08 crc kubenswrapper[5101]: I0122 09:58:08.535887 5101 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" 
path="/var/lib/kubelet/pods/b182bd55-8225-4386-aa02-40b8c9358df5/volumes" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.489281 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.489686 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.538497 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.648034 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.648358 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:10 crc kubenswrapper[5101]: I0122 09:58:10.900023 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdm9w" Jan 22 09:58:11 crc kubenswrapper[5101]: I0122 09:58:11.685371 5101 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ctwhl" podUID="476e9a57-b53c-4d37-9879-68f4fe63bf6a" containerName="registry-server" probeResult="failure" output=< Jan 22 09:58:11 crc kubenswrapper[5101]: timeout: failed to connect service ":50051" within 1s Jan 22 09:58:11 crc kubenswrapper[5101]: > Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.067988 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.068382 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.085332 5101 patch_prober.go:28] interesting pod/image-registry-66587d64c8-7pcd5 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.33:5000/healthz\": context deadline exceeded" start-of-body= Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.085440 5101 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-7pcd5" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.33:5000/healthz\": context deadline exceeded" Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.118109 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:12 crc kubenswrapper[5101]: I0122 09:58:12.941992 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7bct" Jan 22 09:58:13 crc kubenswrapper[5101]: I0122 09:58:13.318924 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:13 crc kubenswrapper[5101]: I0122 09:58:13.319029 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:13 crc kubenswrapper[5101]: I0122 09:58:13.354756 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:13 crc kubenswrapper[5101]: I0122 09:58:13.926829 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-scvj8" Jan 22 09:58:20 crc kubenswrapper[5101]: I0122 09:58:20.685391 5101 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:58:20 crc kubenswrapper[5101]: I0122 09:58:20.736336 5101 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ctwhl" Jan 22 09:59:43 crc kubenswrapper[5101]: I0122 09:59:43.791304 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:59:43 crc kubenswrapper[5101]: I0122 09:59:43.792026 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.147677 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484600-wndl6"] Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.151486 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" containerName="registry" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.151511 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" containerName="registry" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.151529 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="608a5f84-3a25-48f2-ab78-b23b0b6fc9b8" containerName="oc" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.151535 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="608a5f84-3a25-48f2-ab78-b23b0b6fc9b8" containerName="oc" Jan 22 10:00:00 crc kubenswrapper[5101]: 
I0122 10:00:00.152302 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="b182bd55-8225-4386-aa02-40b8c9358df5" containerName="registry" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.152332 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="608a5f84-3a25-48f2-ab78-b23b0b6fc9b8" containerName="oc" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.573055 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.575898 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.576910 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mc72n\"" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.577043 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.582791 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484600-wndl6"] Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.582831 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq"] Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.630075 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkfh\" (UniqueName: \"kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh\") pod \"auto-csr-approver-29484600-wndl6\" (UID: \"20417057-198a-4454-92a8-097d4ee62e5e\") " pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.732139 5101 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dpkfh\" (UniqueName: \"kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh\") pod \"auto-csr-approver-29484600-wndl6\" (UID: \"20417057-198a-4454-92a8-097d4ee62e5e\") " pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.752881 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpkfh\" (UniqueName: \"kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh\") pod \"auto-csr-approver-29484600-wndl6\" (UID: \"20417057-198a-4454-92a8-097d4ee62e5e\") " pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.894218 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.986114 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq"] Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.986349 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.989142 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 10:00:00 crc kubenswrapper[5101]: I0122 10:00:00.989227 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.097833 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484600-wndl6"] Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.140339 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w6hl\" (UniqueName: \"kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.140670 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.140768 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.242534 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.242603 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.243572 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6w6hl\" (UniqueName: \"kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.243719 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.249084 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.259924 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w6hl\" (UniqueName: \"kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl\") pod \"collect-profiles-29484600-nbnkq\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.310576 5101 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.502460 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484600-wndl6" event={"ID":"20417057-198a-4454-92a8-097d4ee62e5e","Type":"ContainerStarted","Data":"0e650f3fff9051e5c361415add6cfec7e570136f4cf3af77fbb138b03eee7090"} Jan 22 10:00:01 crc kubenswrapper[5101]: I0122 10:00:01.510767 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq"] Jan 22 10:00:02 crc kubenswrapper[5101]: I0122 10:00:02.509929 5101 generic.go:358] "Generic (PLEG): container finished" podID="82afd690-92db-4b83-aca8-d76a830c8985" containerID="d6441b30962b30f36e4b13cba93f5521bddb1f2962786aa236217928dd11a909" exitCode=0 Jan 22 10:00:02 crc kubenswrapper[5101]: I0122 10:00:02.509981 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" 
event={"ID":"82afd690-92db-4b83-aca8-d76a830c8985","Type":"ContainerDied","Data":"d6441b30962b30f36e4b13cba93f5521bddb1f2962786aa236217928dd11a909"} Jan 22 10:00:02 crc kubenswrapper[5101]: I0122 10:00:02.510354 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" event={"ID":"82afd690-92db-4b83-aca8-d76a830c8985","Type":"ContainerStarted","Data":"b9a662490daefef6404d2966c0242b77336abb2084cb6b3fd4401cfafda21de4"} Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.738543 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.780496 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume\") pod \"82afd690-92db-4b83-aca8-d76a830c8985\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.780564 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume\") pod \"82afd690-92db-4b83-aca8-d76a830c8985\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.780600 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w6hl\" (UniqueName: \"kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl\") pod \"82afd690-92db-4b83-aca8-d76a830c8985\" (UID: \"82afd690-92db-4b83-aca8-d76a830c8985\") " Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.782731 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "82afd690-92db-4b83-aca8-d76a830c8985" (UID: "82afd690-92db-4b83-aca8-d76a830c8985"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.786520 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl" (OuterVolumeSpecName: "kube-api-access-6w6hl") pod "82afd690-92db-4b83-aca8-d76a830c8985" (UID: "82afd690-92db-4b83-aca8-d76a830c8985"). InnerVolumeSpecName "kube-api-access-6w6hl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.787200 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82afd690-92db-4b83-aca8-d76a830c8985" (UID: "82afd690-92db-4b83-aca8-d76a830c8985"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.881770 5101 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82afd690-92db-4b83-aca8-d76a830c8985-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.881810 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6w6hl\" (UniqueName: \"kubernetes.io/projected/82afd690-92db-4b83-aca8-d76a830c8985-kube-api-access-6w6hl\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:03 crc kubenswrapper[5101]: I0122 10:00:03.881825 5101 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82afd690-92db-4b83-aca8-d76a830c8985-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:04 crc kubenswrapper[5101]: I0122 10:00:04.524394 5101 generic.go:358] "Generic (PLEG): container finished" podID="20417057-198a-4454-92a8-097d4ee62e5e" containerID="1aab6b10c99c614b226faa4b2d76b83eda3c5068d828cec2207b93758fe4c8c1" exitCode=0 Jan 22 10:00:04 crc kubenswrapper[5101]: I0122 10:00:04.524470 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484600-wndl6" event={"ID":"20417057-198a-4454-92a8-097d4ee62e5e","Type":"ContainerDied","Data":"1aab6b10c99c614b226faa4b2d76b83eda3c5068d828cec2207b93758fe4c8c1"} Jan 22 10:00:04 crc kubenswrapper[5101]: I0122 10:00:04.526513 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" event={"ID":"82afd690-92db-4b83-aca8-d76a830c8985","Type":"ContainerDied","Data":"b9a662490daefef6404d2966c0242b77336abb2084cb6b3fd4401cfafda21de4"} Jan 22 10:00:04 crc kubenswrapper[5101]: I0122 10:00:04.526548 5101 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b9a662490daefef6404d2966c0242b77336abb2084cb6b3fd4401cfafda21de4" Jan 22 10:00:04 crc kubenswrapper[5101]: I0122 10:00:04.526523 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-nbnkq" Jan 22 10:00:05 crc kubenswrapper[5101]: I0122 10:00:05.738735 5101 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:05 crc kubenswrapper[5101]: I0122 10:00:05.808305 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpkfh\" (UniqueName: \"kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh\") pod \"20417057-198a-4454-92a8-097d4ee62e5e\" (UID: \"20417057-198a-4454-92a8-097d4ee62e5e\") " Jan 22 10:00:05 crc kubenswrapper[5101]: I0122 10:00:05.814815 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh" (OuterVolumeSpecName: "kube-api-access-dpkfh") pod "20417057-198a-4454-92a8-097d4ee62e5e" (UID: "20417057-198a-4454-92a8-097d4ee62e5e"). InnerVolumeSpecName "kube-api-access-dpkfh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 10:00:05 crc kubenswrapper[5101]: I0122 10:00:05.910112 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dpkfh\" (UniqueName: \"kubernetes.io/projected/20417057-198a-4454-92a8-097d4ee62e5e-kube-api-access-dpkfh\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:06 crc kubenswrapper[5101]: I0122 10:00:06.545149 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484600-wndl6" Jan 22 10:00:06 crc kubenswrapper[5101]: I0122 10:00:06.547388 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484600-wndl6" event={"ID":"20417057-198a-4454-92a8-097d4ee62e5e","Type":"ContainerDied","Data":"0e650f3fff9051e5c361415add6cfec7e570136f4cf3af77fbb138b03eee7090"} Jan 22 10:00:06 crc kubenswrapper[5101]: I0122 10:00:06.547458 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e650f3fff9051e5c361415add6cfec7e570136f4cf3af77fbb138b03eee7090" Jan 22 10:00:13 crc kubenswrapper[5101]: I0122 10:00:13.792328 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:00:13 crc kubenswrapper[5101]: I0122 10:00:13.793180 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:00:43 crc kubenswrapper[5101]: I0122 10:00:43.791461 5101 patch_prober.go:28] interesting pod/machine-config-daemon-m45mk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:00:43 crc kubenswrapper[5101]: I0122 10:00:43.792108 5101 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:00:43 crc kubenswrapper[5101]: I0122 10:00:43.792176 5101 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" Jan 22 10:00:43 crc kubenswrapper[5101]: I0122 10:00:43.793147 5101 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c98fafbc0bdf5104350cd0edfe9623bd0cbd9fbe271ac3d7a333fbaeadc43f66"} pod="openshift-machine-config-operator/machine-config-daemon-m45mk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:00:43 crc kubenswrapper[5101]: I0122 10:00:43.793270 5101 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" podUID="8450e755-f74e-492f-8007-24e3410a8926" containerName="machine-config-daemon" containerID="cri-o://c98fafbc0bdf5104350cd0edfe9623bd0cbd9fbe271ac3d7a333fbaeadc43f66" gracePeriod=600 Jan 22 10:00:44 crc kubenswrapper[5101]: I0122 10:00:44.792905 5101 generic.go:358] "Generic (PLEG): container finished" podID="8450e755-f74e-492f-8007-24e3410a8926" containerID="c98fafbc0bdf5104350cd0edfe9623bd0cbd9fbe271ac3d7a333fbaeadc43f66" exitCode=0 Jan 22 10:00:44 crc kubenswrapper[5101]: I0122 10:00:44.793005 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerDied","Data":"c98fafbc0bdf5104350cd0edfe9623bd0cbd9fbe271ac3d7a333fbaeadc43f66"} Jan 22 10:00:44 crc kubenswrapper[5101]: I0122 10:00:44.793609 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m45mk" 
event={"ID":"8450e755-f74e-492f-8007-24e3410a8926","Type":"ContainerStarted","Data":"f8e27b95424b84b1f74b1eb038e2125fd6f94c0164df8864924aa395fc4c4d32"} Jan 22 10:00:44 crc kubenswrapper[5101]: I0122 10:00:44.793635 5101 scope.go:117] "RemoveContainer" containerID="e642029df1e7996644ea562837d799e64f830ec7fdd5896604ec6d0b05e56220" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.127230 5101 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484602-2pn4h"] Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128335 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82afd690-92db-4b83-aca8-d76a830c8985" containerName="collect-profiles" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128351 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="82afd690-92db-4b83-aca8-d76a830c8985" containerName="collect-profiles" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128362 5101 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20417057-198a-4454-92a8-097d4ee62e5e" containerName="oc" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128368 5101 state_mem.go:107] "Deleted CPUSet assignment" podUID="20417057-198a-4454-92a8-097d4ee62e5e" containerName="oc" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128471 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="82afd690-92db-4b83-aca8-d76a830c8985" containerName="collect-profiles" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.128483 5101 memory_manager.go:356] "RemoveStaleState removing state" podUID="20417057-198a-4454-92a8-097d4ee62e5e" containerName="oc" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.140087 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484602-2pn4h"] Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.140413 5101 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.144010 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.144283 5101 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.144789 5101 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mc72n\"" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.244388 5101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtwm\" (UniqueName: \"kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm\") pod \"auto-csr-approver-29484602-2pn4h\" (UID: \"74e0f79b-336f-4198-9e93-3614d53bf956\") " pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.346525 5101 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtwm\" (UniqueName: \"kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm\") pod \"auto-csr-approver-29484602-2pn4h\" (UID: \"74e0f79b-336f-4198-9e93-3614d53bf956\") " pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.366684 5101 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtwm\" (UniqueName: \"kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm\") pod \"auto-csr-approver-29484602-2pn4h\" (UID: \"74e0f79b-336f-4198-9e93-3614d53bf956\") " pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.468760 5101 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:00 crc kubenswrapper[5101]: I0122 10:02:00.711581 5101 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484602-2pn4h"] Jan 22 10:02:01 crc kubenswrapper[5101]: I0122 10:02:01.247100 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" event={"ID":"74e0f79b-336f-4198-9e93-3614d53bf956","Type":"ContainerStarted","Data":"8e186707bfb0db0d9488c32aad2566793d15871d452fd0929b1c75f543012b37"} Jan 22 10:02:02 crc kubenswrapper[5101]: I0122 10:02:02.257866 5101 generic.go:358] "Generic (PLEG): container finished" podID="74e0f79b-336f-4198-9e93-3614d53bf956" containerID="03e2614d93eaadff1ca7be38e3416a5bc791eef7fa275495be7941fb3ea2bfd9" exitCode=0 Jan 22 10:02:02 crc kubenswrapper[5101]: I0122 10:02:02.257975 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" event={"ID":"74e0f79b-336f-4198-9e93-3614d53bf956","Type":"ContainerDied","Data":"03e2614d93eaadff1ca7be38e3416a5bc791eef7fa275495be7941fb3ea2bfd9"} Jan 22 10:02:03 crc kubenswrapper[5101]: I0122 10:02:03.445498 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:03 crc kubenswrapper[5101]: I0122 10:02:03.486309 5101 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxtwm\" (UniqueName: \"kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm\") pod \"74e0f79b-336f-4198-9e93-3614d53bf956\" (UID: \"74e0f79b-336f-4198-9e93-3614d53bf956\") " Jan 22 10:02:03 crc kubenswrapper[5101]: I0122 10:02:03.494642 5101 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm" (OuterVolumeSpecName: "kube-api-access-zxtwm") pod "74e0f79b-336f-4198-9e93-3614d53bf956" (UID: "74e0f79b-336f-4198-9e93-3614d53bf956"). InnerVolumeSpecName "kube-api-access-zxtwm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 10:02:03 crc kubenswrapper[5101]: I0122 10:02:03.587724 5101 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxtwm\" (UniqueName: \"kubernetes.io/projected/74e0f79b-336f-4198-9e93-3614d53bf956-kube-api-access-zxtwm\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:04 crc kubenswrapper[5101]: I0122 10:02:04.271391 5101 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" event={"ID":"74e0f79b-336f-4198-9e93-3614d53bf956","Type":"ContainerDied","Data":"8e186707bfb0db0d9488c32aad2566793d15871d452fd0929b1c75f543012b37"} Jan 22 10:02:04 crc kubenswrapper[5101]: I0122 10:02:04.271526 5101 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e186707bfb0db0d9488c32aad2566793d15871d452fd0929b1c75f543012b37" Jan 22 10:02:04 crc kubenswrapper[5101]: I0122 10:02:04.271407 5101 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484602-2pn4h" Jan 22 10:02:08 crc kubenswrapper[5101]: I0122 10:02:08.743816 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 22 10:02:08 crc kubenswrapper[5101]: I0122 10:02:08.743820 5101 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"